url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | state_reason | draft | pull_request |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/2812 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2812/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2812/comments | https://api.github.com/repos/huggingface/transformers/issues/2812/events | https://github.com/huggingface/transformers/issues/2812 | 563,278,618 | MDU6SXNzdWU1NjMyNzg2MTg= | 2,812 | How can I finetune the BERTModel on my own corpus? | {
"login": "Reply1999",
"id": 59358589,
"node_id": "MDQ6VXNlcjU5MzU4NTg5",
"avatar_url": "https://avatars.githubusercontent.com/u/59358589?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Reply1999",
"html_url": "https://github.com/Reply1999",
"followers_url": "https://api.github.com/users/Reply1999/followers",
"following_url": "https://api.github.com/users/Reply1999/following{/other_user}",
"gists_url": "https://api.github.com/users/Reply1999/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Reply1999/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Reply1999/subscriptions",
"organizations_url": "https://api.github.com/users/Reply1999/orgs",
"repos_url": "https://api.github.com/users/Reply1999/repos",
"events_url": "https://api.github.com/users/Reply1999/events{/privacy}",
"received_events_url": "https://api.github.com/users/Reply1999/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834052847,
"node_id": "MDU6TGFiZWwxODM0MDUyODQ3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20LM%20(Finetuning)",
"name": "Ex: LM (Finetuning)",
"color": "26FFF8",
"default": false,
"description": "Related to language modeling fine-tuning"
},
{
"id": 1834056761,
"node_id": "MDU6TGFiZWwxODM0MDU2NzYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling",
"name": "Core: Modeling",
"color": "FF8446",
"default": false,
"description": "Internals of the library; Models."
},
{
"id": 1834081910,
"node_id": "MDU6TGFiZWwxODM0MDgxOTEw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Usage",
"name": "Usage",
"color": "e28436",
"default": false,
"description": "General questions about the library"
}
] | closed | false | null | [] | [
"Take a look at the `resize_embeddings` function and `examples/run_language_modeling.py`.",
"sorry, where is the resize_embeddings function?",
"My bad, it's a method on `PretrainedModel` called `resize_token_embeddings`. There is a call in `run_language_modeling.py. "
] | 1,581 | 1,583 | 1,583 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
TensorFlow enthusiasts can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
Thanks for your code!
I want to fine-tune the BERT model on my own corpus, which has a smaller vocabulary than the default size of 30,522. My final goal is a fine-tuned, personalized BERT model that can provide proper word embeddings for future downstream tasks. In short, I need to fine-tune BertModel to provide word embeddings based on my own corpus.
How can I build a new vocabulary, fetch the embeddings from a provided pre-trained model, e.g. bert-base-uncased, and then fine-tune the model on my own corpus?
Do you provide functions for building a vocabulary and further fine-tuning?
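A minimal sketch of the vocabulary-extension approach suggested in the comments above (`resize_token_embeddings`); the added tokens here are illustrative placeholders, not part of the original question:

```python
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

# Hypothetical domain-specific tokens; in practice, build this list from your corpus.
new_tokens = ["oov_term_one", "oov_term_two"]
tokenizer.add_tokens(new_tokens)

# Grow the embedding matrix so the new tokens get (randomly initialized) vectors.
model.resize_token_embeddings(len(tokenizer))
```

The resized model can then be fine-tuned on the corpus with a masked language modeling objective, e.g. via `examples/run_language_modeling.py`.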
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2812/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2812/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2811 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2811/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2811/comments | https://api.github.com/repos/huggingface/transformers/issues/2811/events | https://github.com/huggingface/transformers/issues/2811 | 563,209,271 | MDU6SXNzdWU1NjMyMDkyNzE= | 2,811 | How to use a batch size bigger than zero in Bert Sequence Classification | {
"login": "ayrtondenner",
"id": 13112588,
"node_id": "MDQ6VXNlcjEzMTEyNTg4",
"avatar_url": "https://avatars.githubusercontent.com/u/13112588?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ayrtondenner",
"html_url": "https://github.com/ayrtondenner",
"followers_url": "https://api.github.com/users/ayrtondenner/followers",
"following_url": "https://api.github.com/users/ayrtondenner/following{/other_user}",
"gists_url": "https://api.github.com/users/ayrtondenner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ayrtondenner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ayrtondenner/subscriptions",
"organizations_url": "https://api.github.com/users/ayrtondenner/orgs",
"repos_url": "https://api.github.com/users/ayrtondenner/repos",
"events_url": "https://api.github.com/users/ayrtondenner/events{/privacy}",
"received_events_url": "https://api.github.com/users/ayrtondenner/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I answered your question on stack overflow."
] | 1,581 | 1,581 | 1,581 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
TensorFlow enthusiasts can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
[Hugging Face documentation describes](https://huggingface.co/transformers/model_doc/bert.html#bertforsequenceclassification) how to do sequence classification using a BERT model:
```
from transformers import BertTokenizer, BertForSequenceClassification
import torch
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForSequenceClassification.from_pretrained('bert-base-uncased')
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0) # Batch size 1
labels = torch.tensor([1]).unsqueeze(0) # Batch size 1
outputs = model(input_ids, labels=labels)
loss, logits = outputs[:2]
```
However, the example only covers batch size 1. How can this be implemented when we have a list of phrases and want to use a bigger batch size?
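A minimal sketch of one way to batch several phrases, assuming a recent version of the library where the tokenizer is directly callable and handles padding; the phrases and labels are illustrative:

```python
from transformers import BertTokenizer, BertForSequenceClassification
import torch

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForSequenceClassification.from_pretrained('bert-base-uncased')

phrases = ["Hello, my dog is cute", "My cat sleeps all day"]  # illustrative inputs
encoding = tokenizer(phrases, padding=True, return_tensors='pt')  # pads to the longest phrase
labels = torch.tensor([1, 0])  # one label per phrase, illustrative

outputs = model(input_ids=encoding['input_ids'],
                attention_mask=encoding['attention_mask'],
                labels=labels)
loss, logits = outputs[:2]  # Batch size 2
```

The attention mask tells the model to ignore the padded positions, which is what makes variable-length phrases batchable in the first place.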
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to the original question on Stack Overflow**: https://stackoverflow.com/questions/60170037/how-to-use-a-batch-size-bigger-than-zero-in-bert-sequence-classification | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2811/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2811/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2810 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2810/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2810/comments | https://api.github.com/repos/huggingface/transformers/issues/2810/events | https://github.com/huggingface/transformers/issues/2810 | 563,125,911 | MDU6SXNzdWU1NjMxMjU5MTE= | 2,810 | How to get longer output for summary? | {
"login": "GraphGrailAi",
"id": 4690353,
"node_id": "MDQ6VXNlcjQ2OTAzNTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4690353?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GraphGrailAi",
"html_url": "https://github.com/GraphGrailAi",
"followers_url": "https://api.github.com/users/GraphGrailAi/followers",
"following_url": "https://api.github.com/users/GraphGrailAi/following{/other_user}",
"gists_url": "https://api.github.com/users/GraphGrailAi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GraphGrailAi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GraphGrailAi/subscriptions",
"organizations_url": "https://api.github.com/users/GraphGrailAi/orgs",
"repos_url": "https://api.github.com/users/GraphGrailAi/repos",
"events_url": "https://api.github.com/users/GraphGrailAi/events{/privacy}",
"received_events_url": "https://api.github.com/users/GraphGrailAi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834081910,
"node_id": "MDU6TGFiZWwxODM0MDgxOTEw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Usage",
"name": "Usage",
"color": "e28436",
"default": false,
"description": "General questions about the library"
},
{
"id": 1841528858,
"node_id": "MDU6TGFiZWwxODQxNTI4ODU4",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Summarization",
"name": "Summarization",
"color": "b6f97f",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,581 | 1,587 | 1,587 | NONE | null | # Question
https://stackoverflow.com/questions/60157959/transformers-summarization-with-python-pytorch-how-to-get-longer-output
Should I train it myself to get summary output longer than that used in the original training script:
python run_summarization.py \
--documents_dir $DATA_PATH \
--summaries_output_dir $SUMMARIES_PATH \ # optional
--no_cuda false \
--batch_size 4 \
--min_length 50 \
--max_length 200 \
--beam_size 5 \
--alpha 0.95 \
--block_trigram true \
--compute_rouge true
When I do inference with
--min_length 500 \
--max_length 600 \
I got good output for the first 200 tokens, but the rest of the text is:
. . . [unused7] [unused7] [unused7] [unused8] [unused4] [unused7] [unused7] [unused4] [unused7] [unused8]. [unused4] [unused7] . [unused4] [unused8] [unused4] [unused8]. [unused4] [unused4] [unused8] [unused4] . . [unused4] [unused6] [unused4] [unused7] [unused6] [unused4] [unused8] [unused5] [unused4] [unused7] [unused4] [unused4] [unused7]. [unused4] [unused6]. [unused4] [unused4] [unused4] [unused8] [unused4] [unused7] [unused4] [unused8] [unused6] [unused4] [unused4] [unused4]. [unused4]. [unused5] [unused4] [unused8] [unused7] [unused4] [unused7] [unused9] [unused4] [unused7] [unused4] [unused7] [unused5] [unused4] [unused5] [unused4] [unused6] [unused4]. . . [unused5]. [unused4] [unused4] [unused4] [unused6] [unused5] [unused4] [unused4] [unused6] [unused4] [unused6] [unused4] [unused4] [unused5] [unused4]. [unused5] [unused4] . [unused4] [unused4] [unused8] [unused8] [unused4] [unused7] [unused4] [unused8] [unused4] [unused7] [unused4] [unused8] [unused4] [unused8] [unused4] [unused6] | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2810/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2810/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2809 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2809/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2809/comments | https://api.github.com/repos/huggingface/transformers/issues/2809/events | https://github.com/huggingface/transformers/pull/2809 | 563,010,669 | MDExOlB1bGxSZXF1ZXN0MzczNTAwODMw | 2,809 | Fix typo in src/transformers/data/processors/squad.py | {
"login": "whitedelay",
"id": 38174055,
"node_id": "MDQ6VXNlcjM4MTc0MDU1",
"avatar_url": "https://avatars.githubusercontent.com/u/38174055?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/whitedelay",
"html_url": "https://github.com/whitedelay",
"followers_url": "https://api.github.com/users/whitedelay/followers",
"following_url": "https://api.github.com/users/whitedelay/following{/other_user}",
"gists_url": "https://api.github.com/users/whitedelay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/whitedelay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/whitedelay/subscriptions",
"organizations_url": "https://api.github.com/users/whitedelay/orgs",
"repos_url": "https://api.github.com/users/whitedelay/repos",
"events_url": "https://api.github.com/users/whitedelay/events{/privacy}",
"received_events_url": "https://api.github.com/users/whitedelay/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2809?src=pr&el=h1) Report\n> Merging [#2809](https://codecov.io/gh/huggingface/transformers/pull/2809?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1f5db9a13c8932e02e6e7d599a16dc262b1570bf?src=pr&el=desc) will **decrease** coverage by `30.11%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2809?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2809 +/- ##\n==========================================\n- Coverage 75.02% 44.9% -30.12% \n==========================================\n Files 93 93 \n Lines 15275 15275 \n==========================================\n- Hits 11460 6860 -4600 \n- Misses 3815 8415 +4600\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2809?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/2809/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.24% <ΓΈ> (-0.65%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2809/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG1fcm9iZXJ0YS5weQ==) | `0% <0%> (-100%)` | :arrow_down: |\n| [src/transformers/modeling\\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/2809/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jYW1lbWJlcnQucHk=) | `0% <0%> (-100%)` | :arrow_down: |\n| [src/transformers/modeling\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/2809/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `0% <0%> (-97.64%)` | :arrow_down: |\n| [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/2809/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `0% <0%> (-96%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2809/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `0% <0%> (-95.78%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2809/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `0% <0%> (-94.28%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2809/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `0% <0%> (-87.91%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/2809/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG0ucHk=) | `0% <0%> (-86.42%)` | :arrow_down: |\n| [src/transformers/modeling\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/2809/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19mbGF1YmVydC5weQ==) | `0% <0%> (-83.83%)` | :arrow_down: |\n| ... and [20 more](https://codecov.io/gh/huggingface/transformers/pull/2809/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2809?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2809?src=pr&el=footer). Last update [1f5db9a...5d5447d](https://codecov.io/gh/huggingface/transformers/pull/2809?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Cool, thanks!"
] | 1,581 | 1,581 | 1,581 | CONTRIBUTOR | null | end end -> and end | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2809/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2809/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2809",
"html_url": "https://github.com/huggingface/transformers/pull/2809",
"diff_url": "https://github.com/huggingface/transformers/pull/2809.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2809.patch",
"merged_at": 1581438146000
} |
https://api.github.com/repos/huggingface/transformers/issues/2808 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2808/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2808/comments | https://api.github.com/repos/huggingface/transformers/issues/2808/events | https://github.com/huggingface/transformers/issues/2808 | 562,990,314 | MDU6SXNzdWU1NjI5OTAzMTQ= | 2,808 | Multiple Choice BERT, SWAG task, failure to test | {
"login": "PhaelIshall",
"id": 13065761,
"node_id": "MDQ6VXNlcjEzMDY1NzYx",
"avatar_url": "https://avatars.githubusercontent.com/u/13065761?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PhaelIshall",
"html_url": "https://github.com/PhaelIshall",
"followers_url": "https://api.github.com/users/PhaelIshall/followers",
"following_url": "https://api.github.com/users/PhaelIshall/following{/other_user}",
"gists_url": "https://api.github.com/users/PhaelIshall/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PhaelIshall/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhaelIshall/subscriptions",
"organizations_url": "https://api.github.com/users/PhaelIshall/orgs",
"repos_url": "https://api.github.com/users/PhaelIshall/repos",
"events_url": "https://api.github.com/users/PhaelIshall/events{/privacy}",
"received_events_url": "https://api.github.com/users/PhaelIshall/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834053813,
"node_id": "MDU6TGFiZWwxODM0MDUzODEz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/PyTorch",
"name": "PyTorch",
"color": "a12bef",
"default": false,
"description": "Anything PyTorch"
},
{
"id": 1834066408,
"node_id": "MDU6TGFiZWwxODM0MDY2NDA4",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20Multiple%20Choice",
"name": "Ex: Multiple Choice",
"color": "B6FFF8",
"default": false,
"description": ""
},
{
"id": 1834081910,
"node_id": "MDU6TGFiZWwxODM0MDgxOTEw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Usage",
"name": "Usage",
"color": "e28436",
"default": false,
"description": "General questions about the library"
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,581 | 1,587 | 1,587 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Bert
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The task I am working on is:
* [x] an official GLUE/SQuAD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Download SWAG dataset and put it in some directory and set the path by `export SWAG_DIR=path/to/swag/dir`
2. Copy `run_multiple_choice.py` and `utils_multiple_choice.py`
3. Run the code only for testing with the following command
`./run_multiple_choice.py --model_type bert --task_name swag --model_name_or_path bert-base-uncased --do_lower_case --max_seq_length 80 --output_dir models_bert/swag_testing --data_dir $SWAG_DIR --do_test`
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Current behavior
```
Traceback (most recent call last):
File "./run_multiple_choice.py", line 678, in <module>
main()
File "./run_multiple_choice.py", line 669, in main
result = evaluate(args, model, tokenizer, prefix=prefix, test=True)
File "./run_multiple_choice.py", line 248, in evaluate
eval_dataset = load_and_cache_examples(args, eval_task, tokenizer, evaluate=not test, test=test)
File "./run_multiple_choice.py", line 354, in load_and_cache_examples
examples = processor.get_test_examples(args.data_dir)
File "utils_multiple_choice.py", line 168, in get_test_examples
"For swag testing, the input file does not contain a label column. It can not be tested in current code"
ValueError: For swag testing, the input file does not contain a label column. It can not be tested in current codesetting!
```
The code says that no label column is needed for testing, but it doesn't work with or without one. It does not work with the default `test.csv` file (which is called by default for testing if it is in the directory), and it also does not work with `val.csv` (which has a label column).
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `Transformers` version: current
- Platform: Linux
- Python version: 3.7.5
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2808/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2808/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2807 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2807/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2807/comments | https://api.github.com/repos/huggingface/transformers/issues/2807/events | https://github.com/huggingface/transformers/pull/2807 | 562,976,299 | MDExOlB1bGxSZXF1ZXN0MzczNDcyMTk5 | 2,807 | get_activation('relu') provides a simple mapping from strings in configs to activation functions | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Happy to do TF in a separate PR. I don't think worth breaking backwards compatibility over this.",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2807?src=pr&el=h1) Report\n> Merging [#2807](https://codecov.io/gh/huggingface/transformers/pull/2807?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/70bbe4b1de298651a9665dc86ba9689bca1e080f?src=pr&el=desc) will **increase** coverage by `29.04%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2807?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2807 +/- ##\n===========================================\n+ Coverage 44.91% 73.96% +29.04% \n===========================================\n Files 94 94 \n Lines 15274 15274 \n===========================================\n+ Hits 6860 11297 +4437 \n+ Misses 8414 3977 -4437\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2807?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/2807/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `92.85% <ΓΈ> (ΓΈ)` | |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/2807/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.89% <ΓΈ> (+0.64%)` | :arrow_up: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2807/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `70.86% <100%> (+70.86%)` | :arrow_up: |\n| [src/transformers/modeling\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/2807/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `97.62% <100%> (+97.62%)` | :arrow_up: |\n| [src/transformers/modeling\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/2807/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG0ucHk=) | `86.37% <100%> (+86.37%)` | :arrow_up: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2807/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.16% <100%> (+88.16%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2807/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `61.32% <100%> (+61.32%)` | :arrow_up: |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/2807/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `83.28% <100%> (+83.28%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2807/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <100%> (+80.2%)` | :arrow_up: |\n| ... and [28 more](https://codecov.io/gh/huggingface/transformers/pull/2807/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2807?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2807?src=pr&el=footer). Last update [70bbe4b...6879e76](https://codecov.io/gh/huggingface/transformers/pull/2807?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,581 | 1,581 | 1,581 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2807/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2807/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2807",
"html_url": "https://github.com/huggingface/transformers/pull/2807",
"diff_url": "https://github.com/huggingface/transformers/pull/2807.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2807.patch",
"merged_at": 1581600513000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/2806 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2806/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2806/comments | https://api.github.com/repos/huggingface/transformers/issues/2806/events | https://github.com/huggingface/transformers/issues/2806 | 562,886,631 | MDU6SXNzdWU1NjI4ODY2MzE= | 2,806 | TFBertModel.from_pretrained('neuralmind/bert-base-portuguese-cased') -> TypeError | {
"login": "rodrigoruiz",
"id": 764094,
"node_id": "MDQ6VXNlcjc2NDA5NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/764094?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rodrigoruiz",
"html_url": "https://github.com/rodrigoruiz",
"followers_url": "https://api.github.com/users/rodrigoruiz/followers",
"following_url": "https://api.github.com/users/rodrigoruiz/following{/other_user}",
"gists_url": "https://api.github.com/users/rodrigoruiz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rodrigoruiz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rodrigoruiz/subscriptions",
"organizations_url": "https://api.github.com/users/rodrigoruiz/orgs",
"repos_url": "https://api.github.com/users/rodrigoruiz/repos",
"events_url": "https://api.github.com/users/rodrigoruiz/events{/privacy}",
"received_events_url": "https://api.github.com/users/rodrigoruiz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649070,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Need%20more%20information",
"name": "Need more information",
"color": "d876e3",
"default": false,
"description": "Further information is requested"
},
{
"id": 1834054694,
"node_id": "MDU6TGFiZWwxODM0MDU0Njk0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/TensorFlow",
"name": "TensorFlow",
"color": "FF6F00",
"default": false,
"description": "Anything TensorFlow"
},
{
"id": 1834081910,
"node_id": "MDU6TGFiZWwxODM0MDgxOTEw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Usage",
"name": "Usage",
"color": "e28436",
"default": false,
"description": "General questions about the library"
}
] | closed | false | null | [] | [
"`BertModel` is the pytorch model, and is therefore only available if you have torch installed. As you correctly said, `TFBertModel` is the TensorFlow equivalent.\r\n\r\nImporting with `from transformers import TFBertModel` raises the above error?",
"Loading the model gives me the error: `TFBertModel.from_pretrained('neuralmind/bert-base-portuguese-cased')`",
"This model is only available in PyTorch, Neuralmind has not provided a TensorFlow checkpoint for that model. You can see it on the [page](https://huggingface.co/neuralmind/bert-base-portuguese-cased), as it has the tag `PyTorch`, but no `TensorFlow` tag.\r\n\r\nYou can still load it in TensorFlow, but you have to add the `from_pt` flag:\r\n\r\n```py\r\nfrom transformers import TFBertModel\r\n\r\nTFBertModel.from_pretrained('neuralmind/bert-base-portuguese-cased', from_pt=True)\r\n```\r\n\r\nThis might require you to have PyTorch installed to do the conversion.",
"Thank you, but with that I get the error `OSError: Loading a TF model from a PyTorch checkpoint is not supported when using a model identifier name.`.\r\nI did install PyTorch.",
"Hi, i too have problem importing bert model error:\r\n```\r\nFile \"chatbot.py\", line 54, in models\r\n bert_model = TFBertModel.from_pretrained('bert-base-uncased')\r\n File \"C:\\Users\\CHENG\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\transformers\\modeling_tf_utils.py\", line 351, in from_pretrained\r\n assert os.path.isfile(resolved_archive_file), \"Error retrieving file {}\".format(resolved_archive_file)\r\n File \"C:\\Users\\CHENG\\AppData\\Local\\Programs\\Python\\Python37\\lib\\genericpath.py\", line 30, in isfile\r\n st = os.stat(path)\r\nTypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType\r\n```\r\nsometimes it works, sometimes it throws this error, i don't know why, any help will be appreciated!!\r\n",
"@rodrigoruiz, indeed, this functionality was added 12 days ago with https://github.com/huggingface/transformers/commit/961c69776f8a2c95b92407a086848ebca037de5d, so it wouldn't be available on the pip version of 2.4.1. My bad.\r\n\r\nWould you try installing from source with `pip install git+https://github.com/huggingface/transformers` and let me know if it fixes your issue?",
"@LysandreJik Thank you, that worked!",
"> Hi, i too have problem importing bert model error:\r\n> \r\n> ```\r\n> File \"chatbot.py\", line 54, in models\r\n> bert_model = TFBertModel.from_pretrained('bert-base-uncased')\r\n> File \"C:\\Users\\CHENG\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\transformers\\modeling_tf_utils.py\", line 351, in from_pretrained\r\n> assert os.path.isfile(resolved_archive_file), \"Error retrieving file {}\".format(resolved_archive_file)\r\n> File \"C:\\Users\\CHENG\\AppData\\Local\\Programs\\Python\\Python37\\lib\\genericpath.py\", line 30, in isfile\r\n> st = os.stat(path)\r\n> TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType\r\n> ```\r\n> \r\n> sometimes it works, sometimes it throws this error, i don't know why, any help will be appreciated!!\r\n\r\nI have the same problem with `TFXLMRobertaModel.from_pretrained(\"xlm-roberta-base\")`, did you solve it?",
"Hi @Riccorl my problem somehow just disappear after restarting and upgrading tensorflow to 2.1.0. Iβm not sure how it is solved. Initially, the error pops up randomly, meaning sometimes it works smoothly sometimes not. But I have no error now at all. \r\n\r\nMaybe do a `pip install -U transformers`\r\nAnd then `pip install -U tensorflow-gpu`",
"> Hi @Riccorl my problem somehow just disappear after restarting and upgrading tensorflow to 2.1.0. Iβm not sure how it is solved. Initially, the error pops up randomly, meaning sometimes it works smoothly sometimes not. But I have no error now at all.\r\n> \r\n> Maybe do a `pip install -U transformers`\r\n> And then `pip install -U tensorflow-gpu`\r\n\r\nIt seems like i have problem only with `xlm-roberta` tensorflow models. Other models work. Maybe I should open a new issue",
"I had the same error with this \r\n```\r\n model = TFBertModel.from_pretrained('bert-base-uncased')\r\n File \"/home/cally/.local/lib/python3.7/site-packages/transformers/modeling_tf_utils.py\", line 403, in from_pretrained\r\n assert os.path.isfile(resolved_archive_file), \"Error retrieving file {}\".format(resolved_archive_file)\r\n File \"/usr/local/lib/python3.7/genericpath.py\", line 30, in isfile\r\n st = os.stat(path)\r\nTypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType\r\n```\r\nthis is my code\r\n```\r\nmodel = TFBertModel.from_pretrained('bert-base-uncased')\r\n```\r\ndid anyone solve it\r\n",
"sometimes it works, sometimes it appears error",
"\r\n\r\n\r\n> Hi @Riccorl my problem somehow just disappear after restarting and upgrading tensorflow to 2.1.0. Iβm not sure how it is solved. Initially, the error pops up randomly, meaning sometimes it works smoothly sometimes not. But I have no error now at all.\r\n> \r\n> Maybe do a `pip install -U transformers`\r\n> And then `pip install -U tensorflow-gpu`\r\n\r\nInstalling above packages solved this issue for me. Its working fine now. Thanks @nixon-nyx ",
"I guess this can now be closed ",
"@daraksha-shirin youβre welcome! Glad that I could help!",
"> I guess this can now be closed\r\n\r\nYep. ",
"> I had the same error with this\r\n> \r\n> ```\r\n> model = TFBertModel.from_pretrained('bert-base-uncased')\r\n> File \"/home/cally/.local/lib/python3.7/site-packages/transformers/modeling_tf_utils.py\", line 403, in from_pretrained\r\n> assert os.path.isfile(resolved_archive_file), \"Error retrieving file {}\".format(resolved_archive_file)\r\n> File \"/usr/local/lib/python3.7/genericpath.py\", line 30, in isfile\r\n> st = os.stat(path)\r\n> TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType\r\n> ```\r\n> \r\n> this is my code\r\n> \r\n> ```\r\n> model = TFBertModel.from_pretrained('bert-base-uncased')\r\n> ```\r\n> \r\n> did anyone solve it\r\n\r\nI'm still having the exact same issue when fine-tuning model with `TFAutoModel` with following packages version:\r\n- `tensorflow`: 2.2.0\r\n- `transformers`: 3.0.2"
] | 1,581 | 1,595 | 1,587 | NONE | null | I just installed the library on a TensorFlow environment (2.0.0-rc1) and there is no `BertModel` in `transformers`.
Is `TFBertModel` equivalent? If so, I get the error `TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType` when loading the model with `model = TFBertModel.from_pretrained('neuralmind/bert-base-portuguese-cased')`.
- `transformers` version: 2.4.1
- Platform: Windows 10
- Python version: 3.7.6
- Tensorflow version (GPU?): 2.0.0-rc1 (it automatically uses GPU now)
- Using GPU in script?: No, just importing.
- Using distributed or parallel set-up in script?: No.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2806/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2806/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2805 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2805/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2805/comments | https://api.github.com/repos/huggingface/transformers/issues/2805/events | https://github.com/huggingface/transformers/pull/2805 | 562,866,185 | MDExOlB1bGxSZXF1ZXN0MzczMzg1MjIy | 2,805 | [model_cards] Add new German Europeana BERT models | {
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2805?src=pr&el=h1) Report\n> Merging [#2805](https://codecov.io/gh/huggingface/transformers/pull/2805?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/68ccc04ee6c762183ff2b34b8b85d139f77cbf14?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2805?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2805 +/- ##\n=======================================\n Coverage 75.02% 75.02% \n=======================================\n Files 93 93 \n Lines 15275 15275 \n=======================================\n Hits 11460 11460 \n Misses 3815 3815\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2805?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2805?src=pr&el=footer). Last update [68ccc04...e1833f7](https://codecov.io/gh/huggingface/transformers/pull/2805?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,581 | 1,581 | 1,581 | COLLABORATOR | null | Hi,
this PR adds the model cards for two new BERT models for Historic German.
The cased and uncased BERT models were trained on a huge corpus: newspapers from [Europeana](http://www.europeana-newspapers.eu/). The time period of these (noisy) OCRed newspapers spans the 18th to the 20th century.
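As a usage sketch, these checkpoints load like any other hub model; the identifier below follows the dbmdz naming scheme from the linked repository, so verify it against the model card before use:

```python
from transformers import AutoModel, AutoTokenizer

model_name = "dbmdz/bert-base-german-europeana-cased"  # assumed identifier, see model card
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```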
More information can be found [here](https://github.com/dbmdz/berts) and more detailed results on downstream tasks [here](https://github.com/stefan-it/europeana-bert). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2805/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2805/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2805",
"html_url": "https://github.com/huggingface/transformers/pull/2805",
"diff_url": "https://github.com/huggingface/transformers/pull/2805.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2805.patch",
"merged_at": 1581436179000
} |
https://api.github.com/repos/huggingface/transformers/issues/2804 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2804/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2804/comments | https://api.github.com/repos/huggingface/transformers/issues/2804/events | https://github.com/huggingface/transformers/pull/2804 | 562,851,167 | MDExOlB1bGxSZXF1ZXN0MzczMzcyNzMx | 2,804 | Fix a few issues regarding the language modeling script | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2804?src=pr&el=h1) Report\n> Merging [#2804](https://codecov.io/gh/huggingface/transformers/pull/2804?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/539f601be712619dc8c428f0a0b5e8e15f82ac4c?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2804?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2804 +/- ##\n=======================================\n Coverage 75.02% 75.02% \n=======================================\n Files 93 93 \n Lines 15275 15275 \n=======================================\n Hits 11460 11460 \n Misses 3815 3815\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2804?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/2804/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jYW1lbWJlcnQucHk=) | `100% <0%> (ΓΈ)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2804/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG1fcm9iZXJ0YS5weQ==) | `100% <0%> (ΓΈ)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2804/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `100% <0%> (ΓΈ)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2804/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jdHJsLnB5) | `97.82% <0%> (ΓΈ)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/2804/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `96.54% <0%> (ΓΈ)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2804/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `96.05% <0%> (ΓΈ)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2804/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.84% <0%> (ΓΈ)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/2804/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `95.11% <0%> (ΓΈ)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/2804/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `94.66% <0%> (ΓΈ)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2804/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.78% <0%> (ΓΈ)` | :arrow_up: |\n| ... and [18 more](https://codecov.io/gh/huggingface/transformers/pull/2804/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2804?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2804?src=pr&el=footer). Last update [539f601...98e2921](https://codecov.io/gh/huggingface/transformers/pull/2804?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,581 | 1,581 | 1,581 | MEMBER | null | The language modeling script currently has a few issues.
- in the line-by-line dataset, no special tokens are added (because `batch_encode_plus` has the `add_special_tokens` flag set to `False` by default, which is misleading);
- the max length is computed incorrectly in that same dataset, as it doesn't take into account that `encode_plus` is aware of the special tokens and their impact on the sequence length.
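A minimal sketch of the corrected call on the dataset side; `lines` and `block_size` are illustrative stand-ins for the script's variables, not the PR's exact diff:

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
lines = ["First training sentence.", "Second training sentence."]  # illustrative
block_size = 128  # illustrative maximum length

batch_encoding = tokenizer.batch_encode_plus(
    lines,
    add_special_tokens=True,  # explicitly request the special tokens
    max_length=block_size,    # encode_plus budgets for special tokens itself
)
examples = batch_encoding["input_ids"]  # each entry now includes special tokens
```
| {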
"url": "https://api.github.com/repos/huggingface/transformers/issues/2804/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2804/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2804",
"html_url": "https://github.com/huggingface/transformers/pull/2804",
"diff_url": "https://github.com/huggingface/transformers/pull/2804.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2804.patch",
"merged_at": 1581531795000
} |
https://api.github.com/repos/huggingface/transformers/issues/2803 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2803/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2803/comments | https://api.github.com/repos/huggingface/transformers/issues/2803/events | https://github.com/huggingface/transformers/issues/2803 | 562,765,943 | MDU6SXNzdWU1NjI3NjU5NDM= | 2,803 | Support DeepSpeed for language modeling finetuning | {
"login": "minimaxir",
"id": 2179708,
"node_id": "MDQ6VXNlcjIxNzk3MDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2179708?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/minimaxir",
"html_url": "https://github.com/minimaxir",
"followers_url": "https://api.github.com/users/minimaxir/followers",
"following_url": "https://api.github.com/users/minimaxir/following{/other_user}",
"gists_url": "https://api.github.com/users/minimaxir/gists{/gist_id}",
"starred_url": "https://api.github.com/users/minimaxir/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/minimaxir/subscriptions",
"organizations_url": "https://api.github.com/users/minimaxir/orgs",
"repos_url": "https://api.github.com/users/minimaxir/repos",
"events_url": "https://api.github.com/users/minimaxir/events{/privacy}",
"received_events_url": "https://api.github.com/users/minimaxir/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834052847,
"node_id": "MDU6TGFiZWwxODM0MDUyODQ3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20LM%20(Finetuning)",
"name": "Ex: LM (Finetuning)",
"color": "26FFF8",
"default": false,
"description": "Related to language modeling fine-tuning"
},
{
"id": 1834053007,
"node_id": "MDU6TGFiZWwxODM0MDUzMDA3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20LM%20(Pretraining)",
"name": "Ex: LM (Pretraining)",
"color": "76FFAF",
"default": false,
"description": "Related to language modeling pre-training"
},
{
"id": 1834083927,
"node_id": "MDU6TGFiZWwxODM0MDgzOTI3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/External",
"name": "External",
"color": "fbca04",
"default": false,
"description": "Using the library with external tools (onnx, tflite, ...)"
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,581 | 1,587 | 1,587 | NONE | null | # 🚀 Feature request
https://github.com/microsoft/DeepSpeed
This was just released, and given the code flow in `run_language_modeling.py` it seems like it would not be too difficult to drop in, and it has a permissive license (MIT).
However, given the dependencies and difficulty installing them, it would likely have to be done in a separate file.
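For reference, a minimal sketch of what such a drop-in might look like around a causal language modeling training loop; the `deepspeed.initialize` call follows DeepSpeed's public API, but `args`, `model`, and `train_dataloader` are assumed to come from `run_language_modeling.py`'s existing setup, and device placement is omitted for brevity:

```python
import deepspeed

# `args`, `model`, and `train_dataloader` are assumed to exist in the script already.
model_engine, optimizer, _, _ = deepspeed.initialize(
    args=args,
    model=model,
    model_parameters=[p for p in model.parameters() if p.requires_grad],
)

for batch in train_dataloader:
    loss = model_engine(batch, labels=batch)[0]  # CLM case: inputs double as labels
    model_engine.backward(loss)  # replaces loss.backward()
    model_engine.step()          # replaces optimizer.step() and optimizer.zero_grad()
```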
## Motivation
 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2803/reactions",
"total_count": 15,
"+1": 10,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 5,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2803/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2802 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2802/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2802/comments | https://api.github.com/repos/huggingface/transformers/issues/2802/events | https://github.com/huggingface/transformers/pull/2802 | 562,708,992 | MDExOlB1bGxSZXF1ZXN0MzczMjU2NzY3 | 2,802 | FlauBERT lang embeddings only when n_langs > 1 | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,581 | 1,581 | 1,581 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2802/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2802/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2802",
"html_url": "https://github.com/huggingface/transformers/pull/2802",
"diff_url": "https://github.com/huggingface/transformers/pull/2802.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2802.patch",
"merged_at": 1581359045000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/2801 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2801/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2801/comments | https://api.github.com/repos/huggingface/transformers/issues/2801/events | https://github.com/huggingface/transformers/issues/2801 | 562,678,742 | MDU6SXNzdWU1NjI2Nzg3NDI= | 2,801 | Can't load pre-trained Flaubert model | {
"login": "LoicH",
"id": 15996770,
"node_id": "MDQ6VXNlcjE1OTk2Nzcw",
"avatar_url": "https://avatars.githubusercontent.com/u/15996770?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LoicH",
"html_url": "https://github.com/LoicH",
"followers_url": "https://api.github.com/users/LoicH/followers",
"following_url": "https://api.github.com/users/LoicH/following{/other_user}",
"gists_url": "https://api.github.com/users/LoicH/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LoicH/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LoicH/subscriptions",
"organizations_url": "https://api.github.com/users/LoicH/orgs",
"repos_url": "https://api.github.com/users/LoicH/repos",
"events_url": "https://api.github.com/users/LoicH/events{/privacy}",
"received_events_url": "https://api.github.com/users/LoicH/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649070,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Need%20more%20information",
"name": "Need more information",
"color": "d876e3",
"default": false,
"description": "Further information is requested"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1802861720,
"node_id": "MDU6TGFiZWwxODAyODYxNzIw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20CLI",
"name": "Core: CLI",
"color": "FF6426",
"default": false,
"description": ""
},
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | null | [] | [
"On the second issue, the CLI is Python 3.6+ only. We'll document this better in the future, cc @LysandreJik ",
"On the first issue, looks like your traceback might be truncated. Did you paste all of it?",
"> \r\n> \r\n> On the first issue, looks like your traceback might be truncated. Did you paste all of it?\r\n\r\nYes indeed I forgot the last lines, don't know why... I edited my original post to include the full traceback: \r\n\r\n> Traceback (most recent call last):\r\n> File \"C:\\Users\\myself\\Documents\\work\\dev\\Classif_Annonces\\venv\\lib\\site-packages\\IPython\\core\\interactiveshell.py\", line 3326, in run_code\r\n> exec(code_obj, self.user_global_ns, self.user_ns)\r\n> File \"<ipython-input-2-05c64572fe39>\", line 2, in <module>\r\n> tokenizer = transformers.FlaubertTokenizer.from_pretrained('flaubert-base-cased')\r\n> File \"C:\\Users\\myself\\Documents\\work\\dev\\Classif_Annonces\\venv\\lib\\site-packages\\transformers-2.4.1-py3.5.egg\\transformers\\tokenization_utils.py\", line 309, in from_pretrained\r\n> return cls._from_pretrained(*inputs, **kwargs)\r\n> File \"C:\\Users\\myself\\Documents\\work\\dev\\Classif_Annonces\\venv\\lib\\site-packages\\transformers-2.4.1-py3.5.egg\\transformers\\tokenization_utils.py\", line 410, in _from_pretrained\r\n> list(cls.vocab_files_names.values()),\r\n> OSError: Model name 'flaubert-base-cased' was not found in tokenizers model name list (flaubert-large-cased, flaubert-base-uncased, flaubert-small-cased, flaubert-base-cased). We assumed 'flaubert-base-cased' was a path, a model identifier, or url to a directory containing vocabulary files named ['merges.txt', 'vocab.json'] but couldn't find such vocabulary files at this path or url.\r\n> ",
"I can't replicate this issue (FlaubertTokenizer) in either v2.4.0 or v2.4.1, does it arise when you simply do \r\n\r\n```py\r\nfrom transformers import FlaubertTokenizer\r\ntokenizer= FlaubertTokenizer.from_pretrained(\"flaubert-base-cased\")\r\n```\r\n?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,581 | 1,587 | 1,587 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Flaubert
Language I am using the model on (English, Chinese ...): French
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Load a pre-trained model
I'm following the guide from https://huggingface.co/transformers/model_doc/flaubert.html#flaubertmodel:
```
import transformers
tokenizer = transformers.FlaubertTokenizer.from_pretrained('flaubert-base-cased')
```
```
Traceback (most recent call last):
File "C:\Users\myself\Documents\work\dev\Classif_Annonces\venv\lib\site-packages\IPython\core\interactiveshell.py", line 3326, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-2-05c64572fe39>", line 2, in <module>
tokenizer = transformers.FlaubertTokenizer.from_pretrained('flaubert-base-cased')
File "C:\Users\myself\Documents\work\dev\Classif_Annonces\venv\lib\site-packages\transformers-2.4.1-py3.5.egg\transformers\tokenization_utils.py", line 309, in from_pretrained
return cls._from_pretrained(*inputs, **kwargs)
File "C:\Users\myself\Documents\work\dev\Classif_Annonces\venv\lib\site-packages\transformers-2.4.1-py3.5.egg\transformers\tokenization_utils.py", line 410, in _from_pretrained
list(cls.vocab_files_names.values()),
OSError: Model name 'flaubert-base-cased' was not found in tokenizers model name list (flaubert-large-cased, flaubert-base-uncased, flaubert-small-cased, flaubert-base-cased). We assumed 'flaubert-base-cased' was a path, a model identifier, or url to a directory containing vocabulary files named ['merges.txt', 'vocab.json'] but couldn't find such vocabulary files at this path or url.
```
## Expected behavior
`tokenizer` should be a `FlaubertTokenizer` object
## Environment info
Well, running `python transformers-cli env` gave me another error:
```
(venv) C:\Users\PLHT09191\Documents\work\dev\Classif_Annonces\venv\Scripts>python transformers-cli env
Traceback (most recent call last):
File "transformers-cli", line 4, in <module>
__import__('pkg_resources').run_script('transformers==2.4.1', 'transformers-cli')
File "C:\Users\myself\Documents\work\dev\Classif_Annonces\venv\lib\site-packages\setuptools-40.8.0-py3.5.egg\pkg_resources\__init__.py", line 666, in run_script
File "C:\Users\myself\Documents\work\dev\Classif_Annonces\venv\lib\site-packages\setuptools-40.8.0-py3.5.egg\pkg_resources\__init__.py", line 1446, in run_script
File "c:\users\myself\documents\work\dev\classif_annonces\venv\lib\site-packages\transformers-2.4.1-py3.5.egg\EGG-INFO\scripts\transformers-cli", line 6, in <module>
from transformers.commands.user import UserCommands
File "C:\Users\myself\Documents\work\dev\Classif_Annonces\venv\lib\site-packages\transformers-2.4.1-py3.5.egg\transformers\commands\user.py", line 163
entries: List[os.DirEntry] = list(os.scandir(rel_path))
^
SyntaxError: invalid syntax
```
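(For reference, the failing line uses a PEP 526 variable annotation, which is Python 3.6+ syntax and therefore a `SyntaxError` on Python 3.5. A minimal illustration:)
```python
from typing import List

# PEP 526 annotated assignment: valid on Python 3.6+, SyntaxError on 3.5
entries: List[int] = [1, 2, 3]

# Python 3.5-compatible spelling of the same annotation
entries = [1, 2, 3]  # type: List[int]
```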
- `transformers` version: 2.4.1
- Platform: Windows 64 bits
- Python version: Python 3.5.2
- PyTorch version (GPU?): torch.__version__ = 1.4.0+cpu
- Tensorflow version (GPU?):
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2801/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2801/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2800 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2800/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2800/comments | https://api.github.com/repos/huggingface/transformers/issues/2800/events | https://github.com/huggingface/transformers/issues/2800 | 562,639,782 | MDU6SXNzdWU1NjI2Mzk3ODI= | 2,800 | CircleCI doesn't run slow tests | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"When deploying on CircleCI, it runs the `build_and_test` job, which runs the [following suites](https://github.com/huggingface/transformers/blob/master/.circleci/config.yml#L126-L133).\r\n\r\nThe slow tests are ran by the `run_all_tests_torch_and_tf` suite, which only triggers [weekly](https://github.com/huggingface/transformers/blob/master/.circleci/config.yml#L137). \r\n\r\nThe slow tests are especially slow, and currently fail on CircleCI because the machines can't run for so long. We're exploring options to run them on a specific machine cc @julien-c ",
"Got it, thanks. Can I delete this line https://github.com/huggingface/transformers/blob/81d6841b4be25a164235975e5ebdcf99d7a26633/.circleci/config.yml#L23\r\n\r\nit confused me.",
"If you remove this line the slow tests won't run during the weekly tests though",
"Oh I get it, was missing\r\n`run_all_tests_torch_and_tf` vs `run_tests_torch_and_tf`",
"should we rename `run_all_tests_torch_and_tf` to `run_slow_tests_torch_and_tf`?",
"Well its purpose really is to run all tests, not only the slow tests but the custom tokenizers and soon the doc examples as well, so I feel that the current name is fitting."
] | 1,581 | 1,581 | 1,581 | CONTRIBUTOR | null | `.circleci/config.yml` says `RUN_SLOW: yes`, but all my CircleCI runs have the slow tests skipped.
Is this expected behavior?
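(For context, a simplified sketch of how I understand the gating to work: slow tests are skipped at runtime unless the `RUN_SLOW` environment variable is set, regardless of which CI job runs them.)
```python
import os
import unittest

def slow(test_case):
    """Skip the decorated test unless RUN_SLOW is set in the environment."""
    if not os.getenv("RUN_SLOW"):
        return unittest.skip("test is slow; set RUN_SLOW=1 to enable")(test_case)
    return test_case

class ExampleTest(unittest.TestCase):
    @slow
    def test_big_model(self):
        self.assertTrue(True)
```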
@LysandreJik
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2800/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2800/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2799 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2799/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2799/comments | https://api.github.com/repos/huggingface/transformers/issues/2799/events | https://github.com/huggingface/transformers/pull/2799 | 562,628,766 | MDExOlB1bGxSZXF1ZXN0MzczMTkwNTU3 | 2,799 | Add model readme for bert-base-german-cased | {
"login": "tholor",
"id": 1563902,
"node_id": "MDQ6VXNlcjE1NjM5MDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1563902?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tholor",
"html_url": "https://github.com/tholor",
"followers_url": "https://api.github.com/users/tholor/followers",
"following_url": "https://api.github.com/users/tholor/following{/other_user}",
"gists_url": "https://api.github.com/users/tholor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tholor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tholor/subscriptions",
"organizations_url": "https://api.github.com/users/tholor/orgs",
"repos_url": "https://api.github.com/users/tholor/repos",
"events_url": "https://api.github.com/users/tholor/events{/privacy}",
"received_events_url": "https://api.github.com/users/tholor/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2799?src=pr&el=h1) Report\n> Merging [#2799](https://codecov.io/gh/huggingface/transformers/pull/2799?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/92e974196fc35eb826f64808ae82d20c4380e3eb?src=pr&el=desc) will **increase** coverage by `1.08%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2799?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2799 +/- ##\n==========================================\n+ Coverage 73.95% 75.03% +1.08% \n==========================================\n Files 93 93 \n Lines 15272 15272 \n==========================================\n+ Hits 11295 11460 +165 \n+ Misses 3977 3812 -165\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2799?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2799/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.39% <0%> (+1.32%)` | :arrow_up: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2799/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `94.27% <0%> (+2.2%)` | :arrow_up: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2799/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.21% <0%> (+2.27%)` | :arrow_up: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2799/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.77% <0%> (+9.85%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2799/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.93% <0%> (+81.2%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2799?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2799?src=pr&el=footer). Last update [92e9741...5e0a253](https://codecov.io/gh/huggingface/transformers/pull/2799?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Location is correct!\r\n\r\nThere's a small markup issue though (everything is italicized), I'll fix in next commit as it looks like I don't have push access on your fork.\r\n\r\nAlso will add metadata for language (will tag you in the commit)",
"Thanks for the fast merge :)\r\nI couldn't find the issue with italics, but it seems that on the [website](https://huggingface.co/bert-base-german-cased) the unordered lists are not correctly rendered from the markdown. Any advice on how to get them correctly formatted there?",
"Re. the list styling, yes, we'll tweak!"
] | 1,581 | 1,581 | 1,581 | CONTRIBUTOR | null | Adding a readme for our German BERT model. Not sure if the file location is correct, as the model was added before the model hub / user namespaces were created. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2799/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2799/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2799",
"html_url": "https://github.com/huggingface/transformers/pull/2799",
"diff_url": "https://github.com/huggingface/transformers/pull/2799.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2799.patch",
"merged_at": 1581348450000
} |
https://api.github.com/repos/huggingface/transformers/issues/2798 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2798/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2798/comments | https://api.github.com/repos/huggingface/transformers/issues/2798/events | https://github.com/huggingface/transformers/issues/2798 | 562,614,410 | MDU6SXNzdWU1NjI2MTQ0MTA= | 2,798 | Reduce the CamemBERT dimensions | {
"login": "neuromaancer",
"id": 28112871,
"node_id": "MDQ6VXNlcjI4MTEyODcx",
"avatar_url": "https://avatars.githubusercontent.com/u/28112871?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/neuromaancer",
"html_url": "https://github.com/neuromaancer",
"followers_url": "https://api.github.com/users/neuromaancer/followers",
"following_url": "https://api.github.com/users/neuromaancer/following{/other_user}",
"gists_url": "https://api.github.com/users/neuromaancer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/neuromaancer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neuromaancer/subscriptions",
"organizations_url": "https://api.github.com/users/neuromaancer/orgs",
"repos_url": "https://api.github.com/users/neuromaancer/repos",
"events_url": "https://api.github.com/users/neuromaancer/events{/privacy}",
"received_events_url": "https://api.github.com/users/neuromaancer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834081910,
"node_id": "MDU6TGFiZWwxODM0MDgxOTEw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Usage",
"name": "Usage",
"color": "e28436",
"default": false,
"description": "General questions about the library"
}
] | closed | false | null | [] | [
"Hi @AlafateABULIMITI sounds like a question that's better suited for Stack Overflow. Thanks!"
] | 1,581 | 1,582 | 1,582 | NONE | null | I want to reduce the output dimension by adding a linear layer at the end of the CamemBERT model.
Code:
```python
from transformers import CamembertTokenizer, CamembertModel
import torch
from torch.nn import Sequential, Linear
tokenizer = CamembertTokenizer.from_pretrained('camembert-base')
model = CamembertModel.from_pretrained('camembert-base')
input_ids = torch.tensor(tokenizer.encode("La pose d'un panneau stop.", add_special_tokens=True)).unsqueeze(0) # Batch size 1
# labels = torch.tensor([1] * input_ids.size(1)).unsqueeze(0) # Batch size 1
model = Sequential(model, Linear(768, 256))
outputs = model(input_ids)
print(input_ids)
print(outputs[1].size())
print(outputs[0].size())
```
I got this :
```shell
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in linear(input, weight, bias)
1366 - Output: :math:`(N, *, out\_features)`
1367 """
-> 1368 if input.dim() == 2 and bias is not None:
1369 # fused op is marginally faster
1370 ret = torch.addmm(bias, input, weight.t())
AttributeError: 'tuple' object has no attribute 'dim'
```
Additionally, I want to produce word-level embeddings; however, 768 dimensions is too large for my purposes.
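A minimal sketch of one way around the error above: `nn.Sequential` feeds the model's tuple output straight into `nn.Linear`, so a small wrapper that first picks out the last hidden state should work (the class name is just illustrative).
```python
import torch.nn as nn

class ProjectedCamembert(nn.Module):
    def __init__(self, camembert, out_dim=256):
        super().__init__()
        self.camembert = camembert
        self.proj = nn.Linear(camembert.config.hidden_size, out_dim)

    def forward(self, input_ids):
        last_hidden_state = self.camembert(input_ids)[0]  # (batch, seq_len, 768)
        return self.proj(last_hidden_state)               # (batch, seq_len, 256)
```
With `model` being the original `CamembertModel`, `ProjectedCamembert(model)(input_ids)` then yields 256-dimensional word-level embeddings.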
Thanks for your help.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2798/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2798/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2797 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2797/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2797/comments | https://api.github.com/repos/huggingface/transformers/issues/2797/events | https://github.com/huggingface/transformers/pull/2797 | 562,583,783 | MDExOlB1bGxSZXF1ZXN0MzczMTUzNjE5 | 2,797 | Add model readme for deepset/roberta-base-squad2 | {
"login": "tholor",
"id": 1563902,
"node_id": "MDQ6VXNlcjE1NjM5MDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1563902?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tholor",
"html_url": "https://github.com/tholor",
"followers_url": "https://api.github.com/users/tholor/followers",
"following_url": "https://api.github.com/users/tholor/following{/other_user}",
"gists_url": "https://api.github.com/users/tholor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tholor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tholor/subscriptions",
"organizations_url": "https://api.github.com/users/tholor/orgs",
"repos_url": "https://api.github.com/users/tholor/repos",
"events_url": "https://api.github.com/users/tholor/events{/privacy}",
"received_events_url": "https://api.github.com/users/tholor/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2797?src=pr&el=h1) Report\n> Merging [#2797](https://codecov.io/gh/huggingface/transformers/pull/2797?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/92e974196fc35eb826f64808ae82d20c4380e3eb?src=pr&el=desc) will **increase** coverage by `1.08%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2797?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2797 +/- ##\n==========================================\n+ Coverage 73.95% 75.03% +1.08% \n==========================================\n Files 93 93 \n Lines 15272 15272 \n==========================================\n+ Hits 11295 11460 +165 \n+ Misses 3977 3812 -165\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2797?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2797/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.39% <0%> (+1.32%)` | :arrow_up: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2797/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `94.27% <0%> (+2.2%)` | :arrow_up: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2797/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.21% <0%> (+2.27%)` | :arrow_up: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2797/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.77% <0%> (+9.85%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2797/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.93% <0%> (+81.2%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2797?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2797?src=pr&el=footer). Last update [92e9741...ec005e3](https://codecov.io/gh/huggingface/transformers/pull/2797?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,581 | 1,581 | 1,581 | CONTRIBUTOR | null | Adding a model readme for https://huggingface.co/deepset/roberta-base-squad2 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2797/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2797/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2797",
"html_url": "https://github.com/huggingface/transformers/pull/2797",
"diff_url": "https://github.com/huggingface/transformers/pull/2797.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2797.patch",
"merged_at": 1581366109000
} |
https://api.github.com/repos/huggingface/transformers/issues/2796 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2796/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2796/comments | https://api.github.com/repos/huggingface/transformers/issues/2796/events | https://github.com/huggingface/transformers/issues/2796 | 562,568,356 | MDU6SXNzdWU1NjI1NjgzNTY= | 2,796 | output padding different to zero in hidden layers with attention mask | {
"login": "ShiroKL",
"id": 56442912,
"node_id": "MDQ6VXNlcjU2NDQyOTEy",
"avatar_url": "https://avatars.githubusercontent.com/u/56442912?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ShiroKL",
"html_url": "https://github.com/ShiroKL",
"followers_url": "https://api.github.com/users/ShiroKL/followers",
"following_url": "https://api.github.com/users/ShiroKL/following{/other_user}",
"gists_url": "https://api.github.com/users/ShiroKL/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ShiroKL/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ShiroKL/subscriptions",
"organizations_url": "https://api.github.com/users/ShiroKL/orgs",
"repos_url": "https://api.github.com/users/ShiroKL/repos",
"events_url": "https://api.github.com/users/ShiroKL/events{/privacy}",
"received_events_url": "https://api.github.com/users/ShiroKL/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834053813,
"node_id": "MDU6TGFiZWwxODM0MDUzODEz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/PyTorch",
"name": "PyTorch",
"color": "a12bef",
"default": false,
"description": "Anything PyTorch"
},
{
"id": 1834056761,
"node_id": "MDU6TGFiZWwxODM0MDU2NzYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling",
"name": "Core: Modeling",
"color": "FF8446",
"default": false,
"description": "Internals of the library; Models."
},
{
"id": 1834081910,
"node_id": "MDU6TGFiZWwxODM0MDgxOTEw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Usage",
"name": "Usage",
"color": "e28436",
"default": false,
"description": "General questions about the library"
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,581 | 1,587 | 1,587 | NONE | null | # 🐛 Bug
On the last layer, the hidden states at positions corresponding to padding tokens are not zero, even when attention masking is used.
## Information
Model I am using (Bert, XLNet ...):
RoBERTa and XLM-RoBERTa
Language I am using the model on (English, Chinese ...):
English
The problem arises when using:
* [x] the official example scripts: (give details below)
The task I am working on is:
looking at the output of the last layer of RobertaModel
## To reproduce
Steps to reproduce the behavior:
1. use some padding in your input data
2. create the attention mask accordingly

Example code:
```
import torch
from transformers import RobertaTokenizer, RobertaModel

def tokenize_sentences_Bert(sentences, tokenizer, maxlen):
    tokens = []
    lengths = []
    for s in sentences:
        token = tokenizer.encode(s, add_special_tokens=True, max_length=maxlen)
        lengths.append(len(token))
        token = token + [tokenizer.pad_token_id] * (maxlen - len(token))
        tokens.append(token)
    return tokens, lengths

tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
model = RobertaModel.from_pretrained('roberta-base',
                                     output_hidden_states=False,
                                     output_attentions=True)
max_length = 10
sequence = ["I eat a green apple", "I am playing football tomorrow"]
tokens, lengths = tokenize_sentences_Bert(sequence, tokenizer, maxlen=max_length)
lengths = torch.tensor(lengths)
tokens = torch.tensor(tokens)
attention_mask = (torch.arange(max_length).expand(len(lengths), max_length) < lengths.unsqueeze(1)).float()
print(attention_mask)
outputs = model(tokens, attention_mask=attention_mask)
print(outputs[0][:, :, :2]) # padded positions should be 0 in the last hidden layer
```
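For what it's worth, this may be expected behavior rather than a bug: the attention mask only stops other tokens from attending to the padded positions, but the model still computes hidden states at those positions. Continuing the snippet above, a minimal sketch of zeroing them out explicitly:
```python
# zero out the hidden states at padded positions using the same mask
masked_output = outputs[0] * attention_mask.unsqueeze(-1)
print(masked_output[:, :, :2])  # padded positions are now exactly 0
```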
- `transformers` version:
- Platform: ubuntu
- Python version: 3.6
- PyTorch version (GPU?): 1.4
- Tensorflow version (GPU?):
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2796/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2796/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2795 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2795/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2795/comments | https://api.github.com/repos/huggingface/transformers/issues/2795/events | https://github.com/huggingface/transformers/issues/2795 | 562,551,863 | MDU6SXNzdWU1NjI1NTE4NjM= | 2,795 | Probably a bug in XLMRobertaTokenizer | {
"login": "zjujh1995",
"id": 32924013,
"node_id": "MDQ6VXNlcjMyOTI0MDEz",
"avatar_url": "https://avatars.githubusercontent.com/u/32924013?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zjujh1995",
"html_url": "https://github.com/zjujh1995",
"followers_url": "https://api.github.com/users/zjujh1995/followers",
"following_url": "https://api.github.com/users/zjujh1995/following{/other_user}",
"gists_url": "https://api.github.com/users/zjujh1995/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zjujh1995/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zjujh1995/subscriptions",
"organizations_url": "https://api.github.com/users/zjujh1995/orgs",
"repos_url": "https://api.github.com/users/zjujh1995/repos",
"events_url": "https://api.github.com/users/zjujh1995/events{/privacy}",
"received_events_url": "https://api.github.com/users/zjujh1995/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | null | [] | [
"Hi, if I am not wrong the pad should be 2 ?\r\nat least the parameters tokenizer.pad_token_id for XLM-R \r\n\r\nEDIT: 2 is eos sorry.",
"Indeed, this is a bug that will be fixed when #3198 is merged. Thanks for letting us know."
] | 1,581 | 1,584 | 1,584 | NONE | null | (Everything works perfectly when I experiment with Multilingual BERT, but it seems only the base model is released.)
When using XLM-R, the corresponding tokenizer (XLMRobertaTokenizer) converts \<unk\> and every OOV token into id = 1. However, 1 should be the id of \<pad\>. (And the tokenizer can convert 1 to \<pad\> and 3 to \<unk\>.) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2795/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2795/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2794 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2794/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2794/comments | https://api.github.com/repos/huggingface/transformers/issues/2794/events | https://github.com/huggingface/transformers/issues/2794 | 562,544,518 | MDU6SXNzdWU1NjI1NDQ1MTg= | 2,794 | You must specify an aggregation method to update a MirroredVariable in Replica Context. | {
"login": "iamneerajverma",
"id": 17368001,
"node_id": "MDQ6VXNlcjE3MzY4MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/17368001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iamneerajverma",
"html_url": "https://github.com/iamneerajverma",
"followers_url": "https://api.github.com/users/iamneerajverma/followers",
"following_url": "https://api.github.com/users/iamneerajverma/following{/other_user}",
"gists_url": "https://api.github.com/users/iamneerajverma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iamneerajverma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iamneerajverma/subscriptions",
"organizations_url": "https://api.github.com/users/iamneerajverma/orgs",
"repos_url": "https://api.github.com/users/iamneerajverma/repos",
"events_url": "https://api.github.com/users/iamneerajverma/events{/privacy}",
"received_events_url": "https://api.github.com/users/iamneerajverma/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834054694,
"node_id": "MDU6TGFiZWwxODM0MDU0Njk0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/TensorFlow",
"name": "TensorFlow",
"color": "FF6F00",
"default": false,
"description": "Anything TensorFlow"
},
{
"id": 1834081910,
"node_id": "MDU6TGFiZWwxODM0MDgxOTEw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Usage",
"name": "Usage",
"color": "e28436",
"default": false,
"description": "General questions about the library"
}
] | closed | false | null | [] | [
"Tried executing the test case: **test_optimization_tf.py**\r\nThe test case also fails when on GPU.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,581 | 1,587 | 1,587 | NONE | null | # You must specify an aggregation method to update a MirroredVariable in Replica Context.
## Traceback
```
<ipython-input-28-7cf32baaf070>:52 step_fn *
    gradient_accumulator(grads)
/tensorflow-2.1.0/python3.6/tensorflow_core/python/distribute/distribute_lib.py:763 experimental_run_v2
    return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.6/dist-packages/transformers/optimization_tf.py:229 __call__ *
    accum_gradient.assign_add(gradient)
/tensorflow-2.1.0/python3.6/tensorflow_core/python/distribute/values.py:1124 assign_add
    return self._assign_func(f=assign_add_fn, *args, **kwargs)
/tensorflow-2.1.0/python3.6/tensorflow_core/python/distribute/values.py:1108 _assign_func
    variable_type="MirroredVariable"))
```
Model I am using (Bert, XLNet ...): Bert
Language I am using the model on (English, Chinese ...): English
using GPU: Yes
The problem arises when training on multiple GPUs and accumulating gradients, as done in run_tf_ner.py.
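One workaround pattern, sketched under the assumption that the accumulator variables simply lack synchronization/aggregation settings (this is not necessarily the official fix), is to create them so that `assign_add` becomes legal in replica context:
```python
import tensorflow as tf

grad = tf.zeros([3, 3])  # stand-in for a computed gradient
accum = tf.Variable(
    tf.zeros_like(grad),
    trainable=False,
    synchronization=tf.VariableSynchronization.ON_READ,
    aggregation=tf.VariableAggregation.ONLY_FIRST_REPLICA,
)
accum.assign_add(grad)  # no "must specify an aggregation method" error
```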
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2794/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2794/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2793 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2793/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2793/comments | https://api.github.com/repos/huggingface/transformers/issues/2793/events | https://github.com/huggingface/transformers/pull/2793 | 562,533,851 | MDExOlB1bGxSZXF1ZXN0MzczMTExNjA0 | 2,793 | Fix circleci cuInit error on Tensorflow >= 2.1.0. | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834088753,
"node_id": "MDU6TGFiZWwxODM0MDg4NzUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Tests",
"name": "Tests",
"color": "a6fcca",
"default": false,
"description": "Related to tests"
}
] | closed | false | null | [] | [] | 1,581 | 1,581 | 1,581 | MEMBER | null | TensorFlow 2.1.0 introduced a new dependency model where `pip install tensorflow` installs TF **with GPU support**; before 2.1.0 it would install with CPU support only.
CircleCI machines run without GPU hardware, so at initialisation the TensorFlow tests look for the NVIDIA driver version but fail, as there is no NVIDIA driver running.
This PR introduces an extra (optional) dependency group **tf-cpu**, which explicitly requires **tensorflow-cpu**, and makes sure CI installs the **tf-cpu** extra instead of **tf** when running unit tests.
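A sketch of what the corresponding `setup.py` change could look like (the exact names are assumed from the description above):
```python
# setup.py (illustrative excerpt)
extras_require = {
    "tf": ["tensorflow"],          # as of TF 2.1.0 this pulls in GPU support
    "tf-cpu": ["tensorflow-cpu"],  # CPU-only build for machines without NVIDIA drivers
}
```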
It should remove the following error on CircleCI:
```bash
tests/test_modeling_tf_bert.py::TFBertModelTest::test_attention_outputs 2020-02-10 11:14:08.280770: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory
2020-02-10 11:14:08.280808: E tensorflow/stream_executor/cuda/cuda_driver.cc:351] failed call to cuInit: UNKNOWN ERROR (303)
2020-02-10 11:14:08.280837: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (40403e139ccb): /proc/driver/nvidia/version does not exist
2020-02-10 11:14:08.281093: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 AVX512F FMA
```
Signed-off-by: Morgan Funtowicz <[email protected]> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2793/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2793/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2793",
"html_url": "https://github.com/huggingface/transformers/pull/2793",
"diff_url": "https://github.com/huggingface/transformers/pull/2793.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2793.patch",
"merged_at": 1581418123000
} |
https://api.github.com/repos/huggingface/transformers/issues/2792 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2792/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2792/comments | https://api.github.com/repos/huggingface/transformers/issues/2792/events | https://github.com/huggingface/transformers/issues/2792 | 562,430,697 | MDU6SXNzdWU1NjI0MzA2OTc= | 2,792 | tiny issue with distilbertconfig docs | {
"login": "waalge",
"id": 47293755,
"node_id": "MDQ6VXNlcjQ3MjkzNzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/47293755?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/waalge",
"html_url": "https://github.com/waalge",
"followers_url": "https://api.github.com/users/waalge/followers",
"following_url": "https://api.github.com/users/waalge/following{/other_user}",
"gists_url": "https://api.github.com/users/waalge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/waalge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/waalge/subscriptions",
"organizations_url": "https://api.github.com/users/waalge/orgs",
"repos_url": "https://api.github.com/users/waalge/repos",
"events_url": "https://api.github.com/users/waalge/events{/privacy}",
"received_events_url": "https://api.github.com/users/waalge/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834067346,
"node_id": "MDU6TGFiZWwxODM0MDY3MzQ2",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Documentation",
"name": "Documentation",
"color": "77cc3b",
"default": false,
"description": ""
},
{
"id": 1834081910,
"node_id": "MDU6TGFiZWwxODM0MDgxOTEw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Usage",
"name": "Usage",
"color": "e28436",
"default": false,
"description": "General questions about the library"
}
] | closed | false | null | [] | [
"You're correct! I updated it with 539f601. Thanks."
] | 1,581 | 1,581 | 1,581 | NONE | null | # 🐛 Bug (barely)
Discrepancy in variable names between docs and code:
I presume [``intermediate_size``](https://github.com/huggingface/transformers/blob/520e7f211926e07b2059bc8e21b668db4372e4db/src/transformers/configuration_distilbert.py#L63) refers to [``hidden_dim``](https://github.com/huggingface/transformers/blob/520e7f211926e07b2059bc8e21b668db4372e4db/src/transformers/configuration_distilbert.py#L109)?
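A quick check (a sketch; the printed values are what I would expect from the defaults):
```python
from transformers import DistilBertConfig

config = DistilBertConfig()
print(config.dim)         # 768
print(config.hidden_dim)  # 3072, i.e. what the docstring calls `intermediate_size`
```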
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2792/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2792/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2791 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2791/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2791/comments | https://api.github.com/repos/huggingface/transformers/issues/2791/events | https://github.com/huggingface/transformers/pull/2791 | 562,279,825 | MDExOlB1bGxSZXF1ZXN0MzcyOTA0MDUx | 2,791 | Create BERT-of-Theseus model card | {
"login": "JetRunner",
"id": 22514219,
"node_id": "MDQ6VXNlcjIyNTE0MjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JetRunner",
"html_url": "https://github.com/JetRunner",
"followers_url": "https://api.github.com/users/JetRunner/followers",
"following_url": "https://api.github.com/users/JetRunner/following{/other_user}",
"gists_url": "https://api.github.com/users/JetRunner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JetRunner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JetRunner/subscriptions",
"organizations_url": "https://api.github.com/users/JetRunner/orgs",
"repos_url": "https://api.github.com/users/JetRunner/repos",
"events_url": "https://api.github.com/users/JetRunner/events{/privacy}",
"received_events_url": "https://api.github.com/users/JetRunner/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,581 | 1,581 | 1,581 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2791/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2791/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2791",
"html_url": "https://github.com/huggingface/transformers/pull/2791",
"diff_url": "https://github.com/huggingface/transformers/pull/2791.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2791.patch",
"merged_at": 1581346721000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/2790 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2790/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2790/comments | https://api.github.com/repos/huggingface/transformers/issues/2790/events | https://github.com/huggingface/transformers/issues/2790 | 562,223,245 | MDU6SXNzdWU1NjIyMjMyNDU= | 2,790 | Is there any way that I can directly feed the hidden output of the embedding layer into each of the transformer's layer? | {
"login": "h56cho",
"id": 52889259,
"node_id": "MDQ6VXNlcjUyODg5MjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/h56cho",
"html_url": "https://github.com/h56cho",
"followers_url": "https://api.github.com/users/h56cho/followers",
"following_url": "https://api.github.com/users/h56cho/following{/other_user}",
"gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/h56cho/subscriptions",
"organizations_url": "https://api.github.com/users/h56cho/orgs",
"repos_url": "https://api.github.com/users/h56cho/repos",
"events_url": "https://api.github.com/users/h56cho/events{/privacy}",
"received_events_url": "https://api.github.com/users/h56cho/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834053813,
"node_id": "MDU6TGFiZWwxODM0MDUzODEz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/PyTorch",
"name": "PyTorch",
"color": "a12bef",
"default": false,
"description": "Anything PyTorch"
},
{
"id": 1834081910,
"node_id": "MDU6TGFiZWwxODM0MDgxOTEw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Usage",
"name": "Usage",
"color": "e28436",
"default": false,
"description": "General questions about the library"
}
] | closed | false | null | [] | [
"How about manually handling the embeddings and attention layers?\r\n\r\n```py\r\nfrom transformers import GPT2LMHeadModel, GPT2Tokenizer\r\n\r\ntokenizer = GPT2Tokenizer.from_pretrained(\"gpt2\")\r\nmodel = GPT2LMHeadModel.from_pretrained(\"gpt2\")\r\n\r\nsequence = tokenizer.encode(\"Try this out\", return_tensors=\"pt\")\r\nembeds = model.get_input_embeddings()(seq)\r\nfirst_layer_output, first_layer_attentions = model.transformer.h[0](embeds)\r\n```",
"Hello,\r\n\r\nThank again for your reply!\r\n\r\n1. Just to make sure that I am understanding this correctly, is the line\r\n```python\r\nmodel.transformer.h[0]\r\n```\r\nused to access the first layer of the transformer? so that I can access the second layer, third layer, etc., with ```model.transformer.h[1], model.transformer.h[2]``` and so on?\r\n\r\n2. To access the output head of the transformer, do I simply do:\r\n```python\r\nmodel.transformer.h[last index]\r\n````\r\n?\r\n\r\nThank you!",
"Yes, in GPT-2 the layers can be accessed via the `h` attribute. You're correct in your assumption regarding accessing the second and third layers.\r\n\r\nThis gives you the output of the MLP, which is of dimension `(batch_size, sequence_length, hidden_dim)`.",
"Hello,\r\n\r\nThank you for your reply.\r\n\r\nI am having some trouble understanding the MLP function, which is found [here](https://github.com/huggingface/transformers/blob/73028c5df0c28ca179fbe565482a9c2143787f61/src/transformers/modeling_gpt2.py#L200).\r\n\r\nQ1. For MLP, why are we setting the n_state to be equal to 3072, which is 4 * n_embd?\r\nQ2. Below is the forward function for the MLP class:\r\n```python\r\n def forward(self, x):\r\n h = self.act(self.c_fc(x))\r\n h2 = self.c_proj(h)\r\n return self.dropout(h2)\r\n```\r\nin the forward function above, what exactly do the lines ``` h = self.act(self.c_fc(x))``` and ``` h2 = self.c_proj(h)``` do?\r\n\r\nThank you,",
"> Yes, in GPT-2 the layers can be accessed via the `h` attribute. You're correct in your assumption regarding accessing the second and third layers.\r\n> \r\n> This gives you the output of the MLP, which is of dimension `(batch_size, sequence_length, hidden_dim)`.\r\n\r\nHow would you feed input directly into a particular Keras Bert layer? Is there a way to automatically feed inputs at one layer, and have the rest be processed starting at that layer?\r\n\r\nPurpose: I would like to feed the hidden states of one transformer, into another, so I would need to bypass the inputID->embedding layer. \r\n\r\nI did some tinkering and tried this \r\n\r\n```\r\ntestt = tf.random.uniform([3, 5,768], minval=-1, maxval=1, dtype=tf.dtypes.float32, seed=None, name=None)\r\nmodel.layers[0].encoder.layer[3]((testt, None, None))\r\n```\r\n\r\nSeems promising, since output shapes are (3, 5, 768). \r\n\r\nEdit:\r\n\r\nMaybe I can create a new model from these individual layers. \r\n\r\n```\r\ntestt = tf.random.uniform([3, 5,768], minval=-1, maxval=1, dtype=tf.dtypes.float32\r\n\r\ndef get_new_model():\r\n inputHiddenVals = tf.keras.Input(shape=[None, 768], dtype=tf.float32, name='input_Q',\r\n batch_size=None) \r\n\r\n hidden1 = model.layers[0].encoder.layer[3]((inputHiddenVals, None, None))\r\n hidden2 = model.layers[0].encoder.layer[4]((hidden1[0], None, None))\r\n hidden3 = model.layers[0].encoder.layer[5]((hidden2[0], None, None))\r\n modelNew = tf.keras.Model(inputs=inputHiddenVals, outputs=hidden3)\r\n return modelNew\r\n\r\nnModel = get_new_model()\r\nnModel(testt)\r\n\r\n```\r\nSeems to work\r\n",
"Update, doesn't seem to work. The copied layers have parameters missing. \r\n\r\n```\r\nfrom transformers import TFBertModel, AutoModel, TFRobertaModel\r\nimport tensorflow as tf\r\nimport tensorflow_addons as tfa\r\n\r\ntf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)\r\n\r\nfrom tensorflow import keras\r\nfrom tensorflow.keras import layers\r\nimport numpy as np\r\nimport os\r\nfrom copy import deepcopy\r\n\r\nlogger = tf.get_logger()\r\nlogger.info(tf.__version__)\r\n\r\ndef get_mini_models():\r\n tempModel = TFRobertaModel.from_pretrained('bert-base-uncased', from_pt=True)\r\n\r\n layer9 = deepcopy(tempModel.layers[0].encoder.layer[8])\r\n layer10 = deepcopy(tempModel.layers[0].encoder.layer[9])\r\n\r\n inputHiddenVals = tf.keras.Input(shape=[None, 768], dtype=tf.float32, name='input_Q',\r\n batch_size=None) \r\n\r\n hidden1 = layer9((inputHiddenVals, None, None), training=True)\r\n hidden2 = layer10((hidden1[0], None, None), training=True)\r\n modelNew = tf.keras.Model(inputs=inputHiddenVals, outputs=hidden2)\r\n\r\n del tempModel\r\n\r\n return modelNew\r\n\r\[email protected]\r\ndef loss_fn(_, probs):\r\n bs = tf.shape(probs)[0]\r\n labels = tf.eye(bs, bs)\r\n return tf.losses.categorical_crossentropy(labels,\r\n probs,\r\n from_logits=True)\r\n\r\nmodel = get_mini_models()\r\n# model.layers[2].trainable = False\r\nmodel.compile(loss=loss_fn,\r\n optimizer=tfa.optimizers.AdamW(weight_decay=1e-4, learning_rate=1e-5, \r\n epsilon=1e-06))\r\n\r\ntempModel = TFRobertaModel.from_pretrained('bert-base-uncased', from_pt=True)\r\nlayer9 = deepcopy(tempModel.layers[0].encoder.layer[8])\r\n\r\nfor i, var in enumerate(model.weights):\r\n print(model.weights[i].name)\r\n```\r\n\r\n> tf_roberta_model/roberta/encoder/layer_._8/attention/self/query/kernel:0\r\n> tf_roberta_model/roberta/encoder/layer_._8/attention/self/query/bias:0\r\n> tf_roberta_model/roberta/encoder/layer_._8/attention/self/key/kernel:0\r\n> tf_roberta_model/roberta/encoder/layer_._8/attention/self/key/bias:0\r\n> tf_roberta_model/roberta/encoder/layer_._8/attention/self/value/kernel:0\r\n> tf_roberta_model/roberta/encoder/layer_._8/attention/self/value/bias:0\r\n\r\nIt's missing a layer, and not even all the weights for the first layer were transferred\r\n\r\n```\r\nfor i, var in enumerate(layer9.weights):\r\n print(layer9.weights[i].name)\r\n```\r\n\r\n> tf_roberta_model_1/roberta/encoder/layer_._8/attention/self/query/kernel:0\r\n> tf_roberta_model_1/roberta/encoder/layer_._8/attention/self/query/bias:0\r\n> tf_roberta_model_1/roberta/encoder/layer_._8/attention/self/key/kernel:0\r\n> tf_roberta_model_1/roberta/encoder/layer_._8/attention/self/key/bias:0\r\n> tf_roberta_model_1/roberta/encoder/layer_._8/attention/self/value/kernel:0\r\n> tf_roberta_model_1/roberta/encoder/layer_._8/attention/self/value/bias:0\r\n> tf_roberta_model_1/roberta/encoder/layer_._8/attention/output/dense/kernel:0\r\n> tf_roberta_model_1/roberta/encoder/layer_._8/attention/output/dense/bias:0\r\n> tf_roberta_model_1/roberta/encoder/layer_._8/attention/output/LayerNorm/gamma:0\r\n> tf_roberta_model_1/roberta/encoder/layer_._8/attention/output/LayerNorm/beta:0\r\n> tf_roberta_model_1/roberta/encoder/layer_._8/intermediate/dense/kernel:0\r\n> tf_roberta_model_1/roberta/encoder/layer_._8/intermediate/dense/bias:0\r\n> tf_roberta_model_1/roberta/encoder/layer_._8/output/dense/kernel:0\r\n> tf_roberta_model_1/roberta/encoder/layer_._8/output/dense/bias:0\r\n> tf_roberta_model_1/roberta/encoder/layer_._8/output/LayerNorm/gamma:0\r\n> 
tf_roberta_model_1/roberta/encoder/layer_._8/output/LayerNorm/beta:0\r\n\r\nHere's a colab notebook if you want to play around with it\r\n\r\nhttps://colab.research.google.com/drive/1XoESTWyo4qr4uApIai7Ac4tUDAeLDEI-?usp=sharing"
] | 1,581 | 1,592 | 1,582 | NONE | null | Hello,
For an original sequence ```X``` of length ```n```, I am interested in feeding the embedding of the original sequence ```X``` (```E```) as an input to the self-attention block of each layer of ```GPT2LMHeadModel``` (here, layer = self-attention block + feedforward block), and examining the layer output generated by ```E```.
Is there any way that I can carry out this task with HuggingFace's ```GPT2LMHeadModel```?
Thank you, | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2790/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2790/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2789 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2789/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2789/comments | https://api.github.com/repos/huggingface/transformers/issues/2789/events | https://github.com/huggingface/transformers/issues/2789 | 562,220,249 | MDU6SXNzdWU1NjIyMjAyNDk= | 2,789 | Is there any way that I can extract the hidden output from the self-attention layer? | {
"login": "h56cho",
"id": 52889259,
"node_id": "MDQ6VXNlcjUyODg5MjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/h56cho",
"html_url": "https://github.com/h56cho",
"followers_url": "https://api.github.com/users/h56cho/followers",
"following_url": "https://api.github.com/users/h56cho/following{/other_user}",
"gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/h56cho/subscriptions",
"organizations_url": "https://api.github.com/users/h56cho/orgs",
"repos_url": "https://api.github.com/users/h56cho/repos",
"events_url": "https://api.github.com/users/h56cho/events{/privacy}",
"received_events_url": "https://api.github.com/users/h56cho/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834053813,
"node_id": "MDU6TGFiZWwxODM0MDUzODEz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/PyTorch",
"name": "PyTorch",
"color": "a12bef",
"default": false,
"description": "Anything PyTorch"
},
{
"id": 1834081910,
"node_id": "MDU6TGFiZWwxODM0MDgxOTEw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Usage",
"name": "Usage",
"color": "e28436",
"default": false,
"description": "General questions about the library"
}
] | closed | false | null | [] | [
"That would be the attentions, which you can output by specifying `output_attentions=True` in your configuration object.",
"Hello,\r\n\r\nThank you very much for your reply.\r\n\r\nWhat I want to obtain though, is not the individual attention weights themselves but rather the final product of the self-attention layer at each head (the transformed embeddings that the self-attention layer produces, before they go into the feedforward layer for final processing).\r\n\r\nIs there any way that I can get this final product of the self-attention layer at each head?\r\n\r\nThank you,",
"You mean you want to obtain the result after the softmax multiplied by the value vector?",
"Hello,\r\n\r\nI would like to obtain the result that is obtained after the sum of (value) * (softmax) got multiplied by the matrix H (i.e. the final output embedding of the self-attention layer from a single head)\r\n\r\nThank you,",
"Then that is literally the attentions I mentioned earlier, see in the [source code](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_gpt2.py#L163-L164).",
"Hello,\r\n\r\nso the output of a single attention-head, which is the thing I want to extract, can be formulated as the following:\r\n\r\nO = AE(W^V)H\r\n\r\nwhere\r\n\r\nO = output of a single attention-head\r\nA = matrix that stores attention weights for all tokens in sequence\r\nE = matrix that stores the embeddings of all tokens in a sequence\r\nW^V = matrix that we multiply with E to generate the value vector of all tokens of the sequence\r\nH = projection matrix that is used to generate the final product of a single attention-head\r\n\r\nIf I am not mistaken, ```attention``` gives out the matrix A...\r\nbut what I am looking to get is the output O.....\r\n\r\nIs there anyway that I can get the output O? or does ```attention``` give out the output O, like you described before?\r\n\r\nThank you and sorry for the long question, your help is much appreciated.\r\n\r\n",
"To be even more clear, I just want the output of each head within the layers of transformer. Is there any way that I can get the output of each individual head?\r\n\r\nThank you,",
"Hello, if I can't get the output of individual attention-head explicitly, is there any way that I can retrieve the matrix H, where H is from the formula below: \r\n\r\nO = AE(W^V)H\r\n\r\nO = output of a single attention-head\r\nA = matrix that stores attention weights for all tokens in sequence\r\nE = matrix that stores the embeddings of all tokens in a sequence\r\nW^V = matrix that we multiply with E to generate the value vector of all tokens of the sequence\r\nH = projection matrix that is used to generate the final product of a single attention-head\r\n\r\nThank you,",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@h56cho \r\n\r\nHello\r\n\r\nI also want to know if I can get such hidden outputs\r\nDo you have any progress with it?\r\n\r\nThank you in advance"
] | 1,581 | 1,665 | 1,588 | NONE | null | Hello,
From my understanding, for ```GPT2LMHeadModel```, the output ```past``` allows me to retrieve the key and value vectors that are used in the self-attention block (which comes before the feedforward block).
Is there any way I can extract the output of the self-attention block **at a particular head of a single layer** of ```GPT2LMHeadModel```? (If I am understanding this correctly, the output ```hidden_states``` only returns the output after the input has gone through the feedforward block... but what I want is to extract the output from the self-attention block, which happens before the feedforward block.)
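To make the question concrete, here is a sketch of the closest thing I can do today with a plain PyTorch forward hook (not a built-in transformers feature; the captured tensor is the block's output after the output projection, i.e. already summed over heads, so it is not yet the per-head output I am after):
```python
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

captured = {}

def hook(module, inputs, outputs):
    # outputs[0]: self-attention block output, before the feedforward block
    captured["attn_out"] = outputs[0].detach()

layer = 5  # layer whose self-attention output to inspect
handle = model.transformer.h[layer].attn.register_forward_hook(hook)
with torch.no_grad():
    model(tokenizer.encode("Hello world", return_tensors="pt"))
handle.remove()
print(captured["attn_out"].shape)  # (batch, seq_len, hidden_size)
```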
Thank you, | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2789/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2789/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2788 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2788/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2788/comments | https://api.github.com/repos/huggingface/transformers/issues/2788/events | https://github.com/huggingface/transformers/issues/2788 | 562,142,646 | MDU6SXNzdWU1NjIxNDI2NDY= | 2,788 | SQuAD preprocessing not working for roberta (wrong p_mask) | {
"login": "tholor",
"id": 1563902,
"node_id": "MDQ6VXNlcjE1NjM5MDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1563902?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tholor",
"html_url": "https://github.com/tholor",
"followers_url": "https://api.github.com/users/tholor/followers",
"following_url": "https://api.github.com/users/tholor/following{/other_user}",
"gists_url": "https://api.github.com/users/tholor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tholor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tholor/subscriptions",
"organizations_url": "https://api.github.com/users/tholor/orgs",
"repos_url": "https://api.github.com/users/tholor/repos",
"events_url": "https://api.github.com/users/tholor/events{/privacy}",
"received_events_url": "https://api.github.com/users/tholor/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1771187924,
"node_id": "MDU6TGFiZWwxNzcxMTg3OTI0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Pipeline",
"name": "Core: Pipeline",
"color": "FF7066",
"default": false,
"description": "Internals of the library; Pipeline."
},
{
"id": 1834052333,
"node_id": "MDU6TGFiZWwxODM0MDUyMzMz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20Question%20Answering",
"name": "Ex: Question Answering",
"color": "86FFCF",
"default": false,
"description": ""
},
{
"id": 1834081910,
"node_id": "MDU6TGFiZWwxODM0MDgxOTEw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Usage",
"name": "Usage",
"color": "e28436",
"default": false,
"description": "General questions about the library"
},
{
"id": 1862634478,
"node_id": "MDU6TGFiZWwxODYyNjM0NDc4",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Should%20Fix",
"name": "Should Fix",
"color": "FF0000",
"default": false,
"description": "This has been identified as a bug and should be fixed."
}
] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"I think I have a problem that is related regarding training/evaluation using run_squad.py. \r\n\r\nI wanted to train a roberta model on my own Q&A dataset mixed with the SQuAD dataset by running:\r\n\r\n`python ./examples/run_squad.py --output_dir=/home/jupyter/sec_roberta/roberta-base-mixed-quad --model_type=roberta --model_name_or_path=roberta-large --do_train --train_file=../sec_roberta/financial_and_squad2_train.json --do_eval --predict_file=../sec_roberta/financial_and_squad2_dev.json --learning_rate=1.5e-5 --num_train_epochs=2 --max_seq_length 384 --doc_stride 128 --overwrite_output_dir --per_gpu_train_batch_size=6 --per_gpu_eval_batch_size=6 --warmup_steps 500 --weight_decay 0.01 --version_2_with_negative`\r\n\r\nI ran into this error:\r\n```\r\n02/12/2020 08:22:38 - INFO - __main__ - Creating features from dataset file at .\r\n--\r\n0%\\| \\| 0/542 [00:00<?, ?it/s]\r\nTraceback (most recent call last): File \"./examples/run_squad.py\", line 853, in <module> main() File \"./examples/run_squad.py\", line 791, in main\r\ntrain_dataset = load_and_cache_examples(args, tokenizer, evaluate=False, output_examples=False)\r\nFile \"./examples/run_squad.py\", line 474, in load_and_cache_examples\r\nexamples = processor.get_train_examples(args.data_dir, filename=args.train_file)\r\nFile \"/opt/anaconda3/lib/python3.7/site-packages/transformers/data/processors/squad.py\", line 501, in get_train_examples\r\nreturn self._create_examples(input_data, \"train\")\r\nFile \"/opt/anaconda3/lib/python3.7/site-packages/transformers/data/processors/squad.py\", line 559, in _create_examples\r\nanswers=answers,\r\nFile \"/opt/anaconda3/lib/python3.7/site-packages/transformers/data/processors/squad.py\", line 633, in __init__\r\nself.start_position = char_to_word_offset[start_position_character]\r\nIndexError: list index out of range\r\n```\r\n\r\nI tested my dataset on roberta-base and it works, so I don't necessarily think my dataset is the issue.\r\n\r\nAlso, I ran the same code using the SQuAD 2.0 dataset on roberta large and also on a lm-finetuned version of roberta large and both work, so this is all very mysterious to me. \r\n\r\nI thought it could be related.",
"Update: a fresh install of transformers fixed it for me...\r\ni run into a similar error when trying to use the run_squad.py example to train roberta-large on Squad 2.0\r\nwhen i run\r\n`export DATA_DIR=./data\r\npython ./transformers/examples/run_squad.py \\\r\n--model_type roberta \\\r\n--model_name_or_path roberta-large \\\r\n--do_train \\\r\n--do_eval \\\r\n--version_2_with_negative \\\r\n--train_file $DATA_DIR/squad2/train-v2.0.json \\\r\n--predict_file $DATA_DIR/squad2/dev-v2.0.json \\\r\n--per_gpu_eval_batch_size=6 \\\r\n--per_gpu_train_batch_size=6 \\\r\n--learning_rate 3e-5 \\\r\n--num_train_epochs 2.0 \\\r\n--overwrite_output_dir \\\r\n--overwrite_cache \\\r\n--max_seq_length 384 \\\r\n--doc_stride 128 \\\r\n--save_steps 100000 \\\r\n--output_dir ./roberta_squad/`\r\n\r\ni get the following error:\r\n> Traceback (most recent call last):\r\n File \"/opt/anaconda3/lib/python3.7/multiprocessing/pool.py\", line 121, in worker\r\n result = (True, func(*args, **kwds))\r\n File \"/opt/anaconda3/lib/python3.7/multiprocessing/pool.py\", line 44, in mapstar\r\n return list(map(*args))\r\n File \"/home/joshua_wagner/.local/lib/python3.7/site-packages/transformers/data/processors/squad.py\", line 198, in\r\n squad_convert_example_to_features\r\n p_mask = np.array(span[\"token_type_ids\"])\r\nKeyError: 'token_type_ids'\r\n\r\nEnvironment:\r\n- Debian GNU/Linux 9.11\r\n- Python 3.7\r\n- PyTorch 1.4.0",
"same error as @joshuawagner93 ",
"@joshuawagner93 @HenrykBorzymowski, this issue should have been patched with #3439. Could you install the latest release and let me know if it fixes your issue?",
"@LysandreJik works perfectly fine! Thx ",
"@LysandreJik reinstall fixed the issue, thank you",
"@LysandreJik Unfortunately, we still face the same issue when we try to use roberta in the pipeline for inference. #3439 didn't seem to help for this. ",
"Hi @tholor, indeed, it seems I thought this issue was resolved when it really wasn't. I just opened #4049 which should fix the issue.",
"Awesome, thanks for working on this @LysandreJik!",
"@tholor, the PR should be merged soon, thank you for your patience!",
"Great, thank you! Looking forward to it :)"
] | 1,581 | 1,589 | 1,589 | CONTRIBUTOR | null | **Description**
The pipeline for QA crashes for roberta models.
It's loading the model and tokenizer correctly, but the SQuAD preprocessing produces a wrong `p_mask` leading to no possible prediction and the error message below.
The observed `p_mask` for a roberta model is
```[0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...] ```
while it should only mask the question tokens like this
``` [0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, ...]```
I think the deeper root cause here is that roberta's `token_type_ids` returned from `encode_plus` are now all zeros (introduced in https://github.com/huggingface/transformers/pull/2432) and the creation of `p_mask` in `squad_convert_example_to_features` relies on this information:
https://github.com/huggingface/transformers/blob/520e7f211926e07b2059bc8e21b668db4372e4db/src/transformers/data/processors/squad.py#L189-L202
Haven't checked yet, but this might also affect training/eval if `p_mask` is used there.
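As a stopgap we are experimenting with deriving `p_mask` from the position of the first separator instead of `token_type_ids` — a rough sketch only (it assumes `tokenizer`, `question` and `context` as in the snippet below, and it ignores the trailing special tokens):
```python
import numpy as np

encoded = tokenizer.encode_plus(question, context, max_length=384)
input_ids = encoded["input_ids"]

# everything up to and including the first </s> belongs to the question
first_sep = input_ids.index(tokenizer.sep_token_id)
p_mask = np.ones(len(input_ids), dtype=np.int64)
p_mask[first_sep + 1 :] = 0  # context tokens may contain the answer
p_mask[0] = 0                # keep CLS available for the "no answer" prediction
```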
**How to reproduce?**
```
model_name = "deepset/roberta-base-squad2"
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
res = nlp({
'question': 'What is roberta?',
'context': 'Roberta is a language model that was trained for a longer time, on more data, without NSP'
})
```
results in
```
File "/home/mp/deepset/dev/transformers/src/transformers/pipelines.py", line 847, in __call__
for s, e, score in zip(starts, ends, scores)
File "/home/mp/deepset/dev/transformers/src/transformers/pipelines.py", line 847, in <listcomp>
for s, e, score in zip(starts, ends, scores)
KeyError: 0
```
**Environment**
- Ubuntu 18.04
- Python 3.7.6
- PyTorch 1.3.1 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2788/reactions",
"total_count": 8,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2788/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2787 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2787/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2787/comments | https://api.github.com/repos/huggingface/transformers/issues/2787/events | https://github.com/huggingface/transformers/issues/2787 | 562,124,488 | MDU6SXNzdWU1NjIxMjQ0ODg= | 2,787 | Distillation code loss functions | {
"login": "snaik2016",
"id": 18183245,
"node_id": "MDQ6VXNlcjE4MTgzMjQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/18183245?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/snaik2016",
"html_url": "https://github.com/snaik2016",
"followers_url": "https://api.github.com/users/snaik2016/followers",
"following_url": "https://api.github.com/users/snaik2016/following{/other_user}",
"gists_url": "https://api.github.com/users/snaik2016/gists{/gist_id}",
"starred_url": "https://api.github.com/users/snaik2016/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/snaik2016/subscriptions",
"organizations_url": "https://api.github.com/users/snaik2016/orgs",
"repos_url": "https://api.github.com/users/snaik2016/repos",
"events_url": "https://api.github.com/users/snaik2016/events{/privacy}",
"received_events_url": "https://api.github.com/users/snaik2016/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834053813,
"node_id": "MDU6TGFiZWwxODM0MDUzODEz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/PyTorch",
"name": "PyTorch",
"color": "a12bef",
"default": false,
"description": "Anything PyTorch"
},
{
"id": 1834056761,
"node_id": "MDU6TGFiZWwxODM0MDU2NzYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling",
"name": "Core: Modeling",
"color": "FF8446",
"default": false,
"description": "Internals of the library; Models."
},
{
"id": 1834081910,
"node_id": "MDU6TGFiZWwxODM0MDgxOTEw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Usage",
"name": "Usage",
"color": "e28436",
"default": false,
"description": "General questions about the library"
},
{
"id": 1838876023,
"node_id": "MDU6TGFiZWwxODM4ODc2MDIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Distillation",
"name": "Distillation",
"color": "d4c5f9",
"default": false,
"description": "Related to model distillation"
}
] | closed | false | null | [] | [
"Hello @snaik2016,\r\nThe part of code you're referring to is not a distillation loss. It's the \"classic\" causal language modeling loss.\r\nVictor",
"Not referring to the \"distillation loss\" just the part of the code where loss is computed in distillation code. The exact same quantity is return by model output when labels are passed.",
"Oh yes, you are right, this could be factorized in.\r\nJust note that you have to be careful with the `ignore_index` and make sure it's coherent with your processing (if I remember correctly, at one point, not all the models were using the same `ignore_index` in the loss computation).",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,581 | 1,587 | 1,587 | NONE | null | # β Questions & Help
Why compute cross entropy loss from the hard labels in distillation code?
```python
if self.alpha_clm > 0.0:
    shift_logits = s_logits[..., :-1, :].contiguous()
    shift_labels = lm_labels[..., 1:].contiguous()
    loss_clm = self.lm_loss_fct(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1))
    loss += self.alpha_clm * loss_clm
```
The model outputs loss when passed with the labels.
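To illustrate what I mean, a small sketch (GPT-2 as an example student; the model's first return value is the same shifted cross-entropy whenever `labels` are passed — just check that its internal `ignore_index` matches your padding handling):
```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

input_ids = tokenizer.encode("Hello world", return_tensors="pt")
# the model shifts logits/labels internally, exactly like the snippet above
loss, logits = model(input_ids, labels=input_ids)[:2]
```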
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2787/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2787/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2786 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2786/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2786/comments | https://api.github.com/repos/huggingface/transformers/issues/2786/events | https://github.com/huggingface/transformers/issues/2786 | 562,085,633 | MDU6SXNzdWU1NjIwODU2MzM= | 2,786 | SequenceSummary: config.summary_activation = 'relu' would be ignored | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834052574,
"node_id": "MDU6TGFiZWwxODM0MDUyNTc0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20Sequence%20Classification",
"name": "Ex: Sequence Classification",
"color": "46FFCF",
"default": false,
"description": ""
},
{
"id": 1834056761,
"node_id": "MDU6TGFiZWwxODM0MDU2NzYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling",
"name": "Core: Modeling",
"color": "FF8446",
"default": false,
"description": "Internals of the library; Models."
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"I think both approaches are reasonable. @LysandreJik @thomwolf?",
"I agree with both approaches as well. The second one would probably be the most useful.",
"Yes, like Lysandre"
] | 1,581 | 1,581 | 1,581 | CONTRIBUTOR | null | This isn't a bug, but merely an unintuitive argument name.
`summary_activation` sounds like it can be general, e.g. "relu" or "gelu" or something, but if it's not "tanh" it's ignored.
Since I assume it's annoying to go through all the configs and rename a field to use_tanh=True, I propose that we raise if summary_activation is a string that's not tanh, instead of silently applying no activation.
Another approach could integrate an `ACT2FN` dictionary (see https://github.com/huggingface/transformers/issues/1347)
to actually support the other activation functions.
Happy to do either approach if others think it would be useful.
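To make the second approach concrete, a minimal sketch of what the lookup could look like (`get_summary_activation` and this local `ACT2FN` are illustrative names, not existing library code):
```python
import torch
import torch.nn.functional as F

ACT2FN = {"tanh": torch.tanh, "relu": F.relu, "gelu": F.gelu}

def get_summary_activation(config):
    name = getattr(config, "summary_activation", None)
    if name is None:
        return lambda x: x  # identity, today's silent fallback made explicit
    if name not in ACT2FN:
        raise KeyError(f"unknown summary_activation {name!r}, expected one of {sorted(ACT2FN)}")
    return ACT2FN[name]
```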
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2786/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2786/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2785 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2785/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2785/comments | https://api.github.com/repos/huggingface/transformers/issues/2785/events | https://github.com/huggingface/transformers/pull/2785 | 562,081,084 | MDExOlB1bGxSZXF1ZXN0MzcyNzYwNzg1 | 2,785 | Create README.md | {
"login": "ahotrod",
"id": 44321615,
"node_id": "MDQ6VXNlcjQ0MzIxNjE1",
"avatar_url": "https://avatars.githubusercontent.com/u/44321615?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahotrod",
"html_url": "https://github.com/ahotrod",
"followers_url": "https://api.github.com/users/ahotrod/followers",
"following_url": "https://api.github.com/users/ahotrod/following{/other_user}",
"gists_url": "https://api.github.com/users/ahotrod/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahotrod/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahotrod/subscriptions",
"organizations_url": "https://api.github.com/users/ahotrod/orgs",
"repos_url": "https://api.github.com/users/ahotrod/repos",
"events_url": "https://api.github.com/users/ahotrod/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahotrod/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2785?src=pr&el=h1) Report\n> Merging [#2785](https://codecov.io/gh/huggingface/transformers/pull/2785?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/520e7f211926e07b2059bc8e21b668db4372e4db?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2785?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2785 +/- ##\n=======================================\n Coverage 75.13% 75.13% \n=======================================\n Files 93 93 \n Lines 15249 15249 \n=======================================\n Hits 11457 11457 \n Misses 3792 3792\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2785?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2785?src=pr&el=footer). Last update [520e7f2...48a103f](https://codecov.io/gh/huggingface/transformers/pull/2785?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Just used code fences for consistency with other model cards.\r\n\r\nThanks for sharing!"
] | 1,581 | 1,581 | 1,581 | CONTRIBUTOR | null | Albert xxlarge version 1 language model fine-tuned on SQuAD2.0 with the following results:
```
{'exact': 85.65653162637918,
'f1': 89.260458954177,
'total': 11873,
'HasAns_exact': 82.6417004048583,
'HasAns_f1': 89.85989020967376,
'HasAns_total': 5928,
'NoAns_exact': 88.66274179983179,
'NoAns_f1': 88.66274179983179,
'NoAns_total': 5945,
'best_exact': 85.65653162637918,
'best_exact_thresh': 0.0,
'best_f1': 89.2604589541768,
'best_f1_thresh': 0.0}
```
with script:
```
python -m torch.distributed.launch --nproc_per_node=2 ${RUN_SQUAD_DIR}/run_squad.py \
--model_type albert \
--model_name_or_path albert-xxlarge-v1 \
--do_train \
--train_file ${SQUAD_DIR}/train-v2.0.json \
--predict_file ${SQUAD_DIR}/dev-v2.0.json \
--version_2_with_negative \
--num_train_epochs 3 \
--max_steps 8144 \
--warmup_steps 814 \
--do_lower_case \
--learning_rate 3e-5 \
--max_seq_length 512 \
--doc_stride 128 \
--save_steps 2000 \
--per_gpu_train_batch_size 1 \
--gradient_accumulation_steps 24 \
--output_dir ${MODEL_PATH}
CUDA_VISIBLE_DEVICES=0 python ${RUN_SQUAD_DIR}/run_squad.py \
--model_type albert \
--model_name_or_path ${MODEL_PATH} \
--do_eval \
--train_file ${SQUAD_DIR}/train-v2.0.json \
--predict_file ${SQUAD_DIR}/dev-v2.0.json \
--version_2_with_negative \
--do_lower_case \
--max_seq_length 512 \
--per_gpu_eval_batch_size 48 \
--output_dir ${MODEL_PATH}
```
using the following system & software:
```
OS/Platform: Linux-4.15.0-76-generic-x86_64-with-debian-buster-sid
GPU/CPU: 2 x NVIDIA 1080Ti / Intel i7-8700
Transformers: 2.3.0
PyTorch: 1.4.0
TensorFlow: 2.1.0
Python: 3.7.6
```
Inference/prediction works with the current Transformers v2.4.1.
Access this `albert_xxlargev1_squad2_512` fine-tuned model with "tried & true" code:
```
config_class, model_class, tokenizer_class = \
AlbertConfig, AlbertForQuestionAnswering, AlbertTokenizer
model_name_or_path = "ahotrod/albert_xxlargev1_squad2_512"
config = config_class.from_pretrained(model_name_or_path)
tokenizer = tokenizer_class.from_pretrained(model_name_or_path, do_lower_case=True)
model = model_class.from_pretrained(model_name_or_path, config=config)
```
or the AutoModels (AutoConfig, AutoTokenizer & AutoModel) should also work; however, I have yet to use them in my app to confirm:
```
from transformers import AutoConfig, AutoTokenizer, AutoModel
model_name_or_path = "ahotrod/albert_xxlargev1_squad2_512"
config = AutoConfig.from_pretrained(model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, do_lower_case=True)
model = AutoModel.from_pretrained(model_name_or_path, config=config)
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2785/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2785/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2785",
"html_url": "https://github.com/huggingface/transformers/pull/2785",
"diff_url": "https://github.com/huggingface/transformers/pull/2785.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2785.patch",
"merged_at": 1581373680000
} |
https://api.github.com/repos/huggingface/transformers/issues/2784 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2784/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2784/comments | https://api.github.com/repos/huggingface/transformers/issues/2784/events | https://github.com/huggingface/transformers/issues/2784 | 561,999,599 | MDU6SXNzdWU1NjE5OTk1OTk= | 2,784 | ERROR:CUDA out of memory when using GPT2 tour | {
"login": "papermannnn",
"id": 60811781,
"node_id": "MDQ6VXNlcjYwODExNzgx",
"avatar_url": "https://avatars.githubusercontent.com/u/60811781?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/papermannnn",
"html_url": "https://github.com/papermannnn",
"followers_url": "https://api.github.com/users/papermannnn/followers",
"following_url": "https://api.github.com/users/papermannnn/following{/other_user}",
"gists_url": "https://api.github.com/users/papermannnn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/papermannnn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/papermannnn/subscriptions",
"organizations_url": "https://api.github.com/users/papermannnn/orgs",
"repos_url": "https://api.github.com/users/papermannnn/repos",
"events_url": "https://api.github.com/users/papermannnn/events{/privacy}",
"received_events_url": "https://api.github.com/users/papermannnn/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I've already tried to change batch_size to 1. This doesn't seem to be effective"
] | 1,581 | 1,581 | 1,581 | NONE | null | # β Questions & Help
I followed the tour in the documentation and everything was OK. When I start training, an error occurs:
RuntimeError: CUDA out of memory. Tried to allocate 12.00 MiB (GPU 0; 6.00 GiB total capacity; 4.44 GiB already allocated; 3.06 MiB free; 4.57 GiB reserved in total by PyTorch)
I'm using a GTX 2060 (6 GB); I'd like to know whether this GPU is adequate for this work.
Thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2784/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2784/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2783 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2783/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2783/comments | https://api.github.com/repos/huggingface/transformers/issues/2783/events | https://github.com/huggingface/transformers/issues/2783 | 561,998,838 | MDU6SXNzdWU1NjE5OTg4Mzg= | 2,783 | Features proposals to simplify training Tensorflow model | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834052129,
"node_id": "MDU6TGFiZWwxODM0MDUyMTI5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/High-Level%20feature",
"name": "High-Level feature",
"color": "f7c9a3",
"default": false,
"description": ""
},
{
"id": 1834054694,
"node_id": "MDU6TGFiZWwxODM0MDU0Njk0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/TensorFlow",
"name": "TensorFlow",
"color": "FF6F00",
"default": false,
"description": "Anything TensorFlow"
}
] | closed | false | null | [] | [
"I think that we are trying to keep model logic and training logic very separate. So if you want to try to make the tf **examples** (where training happens) simpler, I'd recommend sending a PR that does that for `examples/run_tf_glue.py` or `example_run_tf_ner.py`, or even adding a new Tensorflow example for a task you care about.",
"Cool, thanks a lot for your reply!!\r\n\r\nAll these features are not only focus on Tensorflow, I have written Tensorflow because it is the framework I know, but I'm sure we can totally apply them on the Pytorch part as well.\r\n\r\nI think I haven't been clear enough and this is my fault, sorry. What I meant is two kind of features:\r\n\r\nThe point 1. is specific to the [training part of the core pipeline](https://github.com/huggingface/transformers/blob/master/src/transformers/commands/train.py). I also do agree that modifying the config file can be confusing, maybe to create a separate config file specifically for training, I don't know, I'm still open to suggestions. My thought is to have some metadata on the training itself in order to be able to easily reproduce it without giving yourself the values to the network as parameters but just by sharing a file that we can upload to the models hubstore.\r\n\r\nThe point 2 is more like some utilities to simplify how to handle models.\r\n\r\nThe point 3 is to have something similar to the Pytorch models where the loss is directly computed in the forward method, I was thinking it could be a good idea to have the same facility for Tensorflow.\r\n\r\nI have already started to work on 2 and 3 to see how it can be, and the pros/cons on the existing examples. I will certainly do a PR later this week or next week to see if you have any review on it.\r\n\r\n(What I gonna say below is just my own opinion)\r\n \r\nI'm suggesting all this because when I talk with most of my colleagues or friends (that are not very familiar with Deep Learning) they don't have the knowledge to create the NER example either in TF or in Pytorch, but would like to train a NER model for their work/project, same thing for simple text classification and they don't want to be bored by writing all this very technical code or to know the meaning of each parameter of the scripts. And in companies I think that more and more people want to train their model without having any ML knowledge.",
"Thanks, and I agree very much with your vision! Looking forward to your PR!\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,581 | 1,590 | 1,590 | CONTRIBUTOR | null | # π Feature request
Hello,
I have been thinking about implementing some features to simplify the way we can train TensorFlow models (I think they can certainly be adapted to PyTorch as well), and I wanted to know whether they might be useful for you. Here is a non-exhaustive list of features I have in mind:
1. Augment the training pipeline with some useful functions such as:
- An [LR finder](https://docs.fast.ai/callbacks.lr_finder.html) that will try to find the best LR for a specific dataset
- [Approach to help to set better hyperparameters](https://arxiv.org/abs/1803.09820)
- [Cyclical LR during training](https://arxiv.org/abs/1506.01186)
- Augment the config.json file of each model with specific training parameters (epochs, LR, batch size, GPUs, etc...) in order to better reproduce a specific training.
2. Modify the `TFPretrainedModel` class a bit in order to better handle:
- multiple GPU training
- Custom training loop
- Custom optimizer creation
- Gradient accumulation (see the sketch after this list)
- Add a checkpoint manager
- Handle Tensorboard
3. Modify a few model classes to add custom loss computation, such as for NER, as I have done in the TF example.
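To give an idea of what I mean by a gradient accumulation utility, here is a minimal TF2 sketch (the tiny `model` and `dataset` are toy stand-ins so it runs on its own; a transformers TF model returns a tuple, so you would take `outputs[0]` for the logits there):
```python
import tensorflow as tf

# toy stand-ins for a real model and dataset
model = tf.keras.Sequential([tf.keras.layers.Dense(2)])
model.build(input_shape=(None, 8))
dataset = tf.data.Dataset.from_tensor_slices(
    (tf.random.normal([32, 8]), tf.random.uniform([32], maxval=2, dtype=tf.int32))
).batch(4)

optimizer = tf.keras.optimizers.Adam(3e-5)
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
accum_steps = 4
accum = [tf.Variable(tf.zeros_like(v), trainable=False) for v in model.trainable_variables]

for step, (features, labels) in enumerate(dataset):
    with tf.GradientTape() as tape:
        logits = model(features, training=True)
        loss = loss_fn(labels, logits) / accum_steps  # scale so accumulated grads average out
    for grad, acc in zip(tape.gradient(loss, model.trainable_variables), accum):
        acc.assign_add(grad)
    if (step + 1) % accum_steps == 0:
        optimizer.apply_gradients(zip(accum, model.trainable_variables))
        for acc in accum:
            acc.assign(tf.zeros_like(acc))
```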
I don't know if this sounds interesting to you @thomwolf, @julien-c and @LysandreJik ?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2783/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2783/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2782 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2782/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2782/comments | https://api.github.com/repos/huggingface/transformers/issues/2782/events | https://github.com/huggingface/transformers/issues/2782 | 561,991,279 | MDU6SXNzdWU1NjE5OTEyNzk= | 2,782 | RoBERTaMultiChoice does not work with `roberta-large` | {
"login": "yuchenlin",
"id": 10104354,
"node_id": "MDQ6VXNlcjEwMTA0MzU0",
"avatar_url": "https://avatars.githubusercontent.com/u/10104354?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuchenlin",
"html_url": "https://github.com/yuchenlin",
"followers_url": "https://api.github.com/users/yuchenlin/followers",
"following_url": "https://api.github.com/users/yuchenlin/following{/other_user}",
"gists_url": "https://api.github.com/users/yuchenlin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yuchenlin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuchenlin/subscriptions",
"organizations_url": "https://api.github.com/users/yuchenlin/orgs",
"repos_url": "https://api.github.com/users/yuchenlin/repos",
"events_url": "https://api.github.com/users/yuchenlin/events{/privacy}",
"received_events_url": "https://api.github.com/users/yuchenlin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@yuchenlin May I ask if you figured out the bug?"
] | 1,581 | 1,581 | 1,581 | CONTRIBUTOR | null | # π Bug
## Information
Model I am using (Bert, XLNet ...):
**roberta-large**
Language I am using the model on (English, Chinese ...):
English
The problem arises when using:
* [x] the official example scripts:
https://github.com/huggingface/transformers/tree/6c1b23554f8bb5b5e1f6c80969acab764c755678/examples#multiple-choice
* [ ] my own modified scripts: (give details below)
The task I am working on is:
* [x] an official GLUE/SQuAD task: **SWAG**
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
just follow the example code for running the SWAG dataset, but with `roberta-large` instead of `roberta-base` (which works well):
```
export SWAG_DIR=~/swagaf-master/data/
python ./examples/run_multiple_choice.py \
--model_type roberta \
--task_name swag \
--model_name_or_path roberta-large \
--do_train \
--do_eval \
--do_lower_case \
--data_dir $SWAG_DIR \
--learning_rate 5e-5 \
--num_train_epochs 3 \
--max_seq_length 80 \
--output_dir tmp/swag_base \
--per_gpu_eval_batch_size=16 \
--per_gpu_train_batch_size=16 \
--gradient_accumulation_steps 2 \
--overwrite_output
```
And it will say:
02/08/2020 00:46:23 - INFO - transformers.modeling_utils - **Weights of RobertaForMultipleChoice not initialized from pretrained model:** ['classifier.weight', 'classifier.bias']
02/08/2020 00:46:23 - INFO - transformers.modeling_utils - **Weights from pretrained model not used in RobertaForMultipleChoice:** ['lm_head.bias', 'lm_head.dense.weight', 'lm_head.dense.bias', 'lm_head.layer_norm.weight', 'lm_head.layer_norm.bias', 'lm_head.decoder.weight']
Consequently, the script learns a model from scratch instead of fine-tuning the pre-trained roberta-large.
## Expected behavior
It should load the pre-trained weights of the roberta-large model.
## Environment info
- `transformers` version: 2.4.1
- Platform: linux
- Python version: 3.7
- PyTorch version (GPU?): GPU
- Tensorflow version (GPU?): n/a
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2782/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2782/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2781 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2781/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2781/comments | https://api.github.com/repos/huggingface/transformers/issues/2781/events | https://github.com/huggingface/transformers/issues/2781 | 561,938,129 | MDU6SXNzdWU1NjE5MzgxMjk= | 2,781 | Flaky TF pipelines test on CircleCI | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1771187924,
"node_id": "MDU6TGFiZWwxNzcxMTg3OTI0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Pipeline",
"name": "Core: Pipeline",
"color": "FF7066",
"default": false,
"description": "Internals of the library; Pipeline."
},
{
"id": 1834054694,
"node_id": "MDU6TGFiZWwxODM0MDU0Njk0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/TensorFlow",
"name": "TensorFlow",
"color": "FF6F00",
"default": false,
"description": "Anything TensorFlow"
},
{
"id": 1834088753,
"node_id": "MDU6TGFiZWwxODM0MDg4NzUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Tests",
"name": "Tests",
"color": "a6fcca",
"default": false,
"description": "Related to tests"
}
] | closed | false | null | [] | [
"Indeed, this is a recurring error. I have not yet found the time to dive into it yet, we should do so shortly.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,581 | 1,587 | 1,587 | CONTRIBUTOR | null | Environment: CircleCI
Test: `tests/test_pipelines.py::MultiColumnInputTestCase::test_tf_question_answering`
Traceback: https://circleci.com/gh/huggingface/transformers/15691?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link
Diff where I changed nothing relevant and the test started passing: https://github.com/huggingface/transformers/pull/2745/commits/a4edf2e878d23346f45715ac213f1f870ae8ec0c
Happy to look deeper if helpful! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2781/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2781/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2780 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2780/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2780/comments | https://api.github.com/repos/huggingface/transformers/issues/2780/events | https://github.com/huggingface/transformers/issues/2780 | 561,926,066 | MDU6SXNzdWU1NjE5MjYwNjY= | 2,780 | Pipelines- if initial model download is interrupted, everything is ruined | {
"login": "KChalk",
"id": 11653160,
"node_id": "MDQ6VXNlcjExNjUzMTYw",
"avatar_url": "https://avatars.githubusercontent.com/u/11653160?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KChalk",
"html_url": "https://github.com/KChalk",
"followers_url": "https://api.github.com/users/KChalk/followers",
"following_url": "https://api.github.com/users/KChalk/following{/other_user}",
"gists_url": "https://api.github.com/users/KChalk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KChalk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KChalk/subscriptions",
"organizations_url": "https://api.github.com/users/KChalk/orgs",
"repos_url": "https://api.github.com/users/KChalk/repos",
"events_url": "https://api.github.com/users/KChalk/events{/privacy}",
"received_events_url": "https://api.github.com/users/KChalk/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1771187924,
"node_id": "MDU6TGFiZWwxNzcxMTg3OTI0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Pipeline",
"name": "Core: Pipeline",
"color": "FF7066",
"default": false,
"description": "Internals of the library; Pipeline."
},
{
"id": 1834060867,
"node_id": "MDU6TGFiZWwxODM0MDYwODY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20Named%20Entity%20Recognition",
"name": "Ex: Named Entity Recognition",
"color": "06FFD8",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,581 | 1,587 | 1,587 | NONE | null | # π Bug
## Information
Model I am using (Bert, XLNet ...): pipeline('ner') and pipeline('feature-extraction')
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
- [x] my own modified scripts: (give details below)
The task I am working on is:
- [x] (mostly NA) my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. fresh install transformers from source
2. run:
```python
from transformers import pipeline
model = pipeline('feature-extraction')
```
3. interrupt the download, then rerun step 2
Error on reload:
```python
Downloading: 100%
230/230 [00:01<00:00, 136B/s]
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
~/miniconda3/envs/hugging/lib/python3.7/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
466 try:
--> 467 state_dict = torch.load(resolved_archive_file, map_location="cpu")
468 except Exception:
~/miniconda3/envs/hugging/lib/python3.7/site-packages/torch/serialization.py in load(f, map_location, pickle_module)
357 try:
--> 358 return _load(f, map_location, pickle_module)
359 finally:
~/miniconda3/envs/hugging/lib/python3.7/site-packages/torch/serialization.py in _load(f, map_location, pickle_module)
548 assert key in deserialized_objects
--> 549 deserialized_objects[key]._set_from_file(f, offset, f_should_read_directly)
550 offset = None
RuntimeError: unexpected EOF. The file might be corrupted.
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
<ipython-input-26-2fd4b689c1db> in <module>
----> 1 featify=pipeline('feature-extraction')
~/miniconda3/envs/hugging/lib/python3.7/site-packages/transformers/pipelines.py in pipeline(task, model, config, tokenizer, modelcard, **kwargs)
1084 "Trying to load the model with Tensorflow."
1085 )
-> 1086 model = model_class.from_pretrained(model, config=config, **model_kwargs)
1087
1088 return task(model=model, tokenizer=tokenizer, modelcard=modelcard, framework=framework, **kwargs)
~/miniconda3/envs/hugging/lib/python3.7/site-packages/transformers/modeling_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
375 for config_class, model_class in MODEL_MAPPING.items():
376 if isinstance(config, config_class):
--> 377 return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
378 raise ValueError(
379 "Unrecognized configuration class {} for this kind of AutoModel: {}.\n"
~/miniconda3/envs/hugging/lib/python3.7/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
468 except Exception:
469 raise OSError(
--> 470 "Unable to load weights from pytorch checkpoint file. "
471 "If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True. "
472 )
OSError: Unable to load weights from pytorch checkpoint file. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
```
## Expected behavior
Model should (download and) load.
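A possible workaround in the meantime is to re-download over the corrupt cached file and build the pipeline from the loaded objects (sketch; I believe `distilbert-base-cased` is the default model for `feature-extraction`, adjust if not):
```python
from transformers import AutoModel, AutoTokenizer, pipeline

name = "distilbert-base-cased"
model = AutoModel.from_pretrained(name, force_download=True)  # overwrite the corrupt cache
tokenizer = AutoTokenizer.from_pretrained(name)
nlp = pipeline("feature-extraction", model=model, tokenizer=tokenizer)
```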
## Environment info
- `transformers` version: 2.4.1
- Platform: WSL
- Python version: 3.7.6.final.0
- PyTorch version (GPU?): 0.4.1 (no)
- Tensorflow version (GPU?): none
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2780/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2780/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2779 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2779/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2779/comments | https://api.github.com/repos/huggingface/transformers/issues/2779/events | https://github.com/huggingface/transformers/issues/2779 | 561,890,394 | MDU6SXNzdWU1NjE4OTAzOTQ= | 2,779 | configuration from custom config file not working | {
"login": "mainulquraishi",
"id": 14335238,
"node_id": "MDQ6VXNlcjE0MzM1MjM4",
"avatar_url": "https://avatars.githubusercontent.com/u/14335238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mainulquraishi",
"html_url": "https://github.com/mainulquraishi",
"followers_url": "https://api.github.com/users/mainulquraishi/followers",
"following_url": "https://api.github.com/users/mainulquraishi/following{/other_user}",
"gists_url": "https://api.github.com/users/mainulquraishi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mainulquraishi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mainulquraishi/subscriptions",
"organizations_url": "https://api.github.com/users/mainulquraishi/orgs",
"repos_url": "https://api.github.com/users/mainulquraishi/repos",
"events_url": "https://api.github.com/users/mainulquraishi/events{/privacy}",
"received_events_url": "https://api.github.com/users/mainulquraishi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834081910,
"node_id": "MDU6TGFiZWwxODM0MDgxOTEw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Usage",
"name": "Usage",
"color": "e28436",
"default": false,
"description": "General questions about the library"
}
] | closed | false | null | [] | [
"Did you try using `GPT2Config.from_json_file('xxx.json')`? `from_pretrained` should be used when pointing to a directory containing a `config.json` file.",
"yes it is working now ",
"Great to hear!"
] | 1,581 | 1,581 | 1,581 | NONE | null | I am trying to get the configuration from a custom config file with the following line:
`config = GPT2Config.from_pretrained("./lm/gpt2-xl/lm/my_config.json")`
This is similar to the example on this [page](https://huggingface.co/transformers/main_classes/configuration.html#transformers.PretrainedConfig), but I am getting the following error:
```
OSError: Model name './lm/gpt2-xl/lm/my_config.json' was not found in model name list. We assumed 'https://s3.amazonaws.com/models.huggingface.co/bert/./lm/gpt2-xl/lm/my_config.json/config.json' was a path, a model identifier, or url to a configuration file named config.json or a directory containing such a file but couldn't find any such file at this path or url.
```
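For reference, a minimal sketch of the two loading paths that are expected to work (per the answer in the comments above; paths are illustrative):

```python
from transformers import GPT2Config

# Load the configuration directly from a JSON file:
config = GPT2Config.from_json_file("./lm/gpt2-xl/lm/my_config.json")

# Or point from_pretrained at a directory that contains a file named config.json:
config = GPT2Config.from_pretrained("./lm/gpt2-xl/lm/")
```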
Am I missing something? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2779/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2779/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2778 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2778/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2778/comments | https://api.github.com/repos/huggingface/transformers/issues/2778/events | https://github.com/huggingface/transformers/pull/2778 | 561,854,929 | MDExOlB1bGxSZXF1ZXN0MzcyNTg3MTQx | 2,778 | Preserve spaces in GPT-2 tokenizers | {
"login": "joeddav",
"id": 9353833,
"node_id": "MDQ6VXNlcjkzNTM4MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joeddav",
"html_url": "https://github.com/joeddav",
"followers_url": "https://api.github.com/users/joeddav/followers",
"following_url": "https://api.github.com/users/joeddav/following{/other_user}",
"gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joeddav/subscriptions",
"organizations_url": "https://api.github.com/users/joeddav/orgs",
"repos_url": "https://api.github.com/users/joeddav/repos",
"events_url": "https://api.github.com/users/joeddav/events{/privacy}",
"received_events_url": "https://api.github.com/users/joeddav/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You're correct about the GPT-2 tokenizer βΒ I failed to consider that GPT2 doesn't have a BOS token. I've pushed an alternative solution that defines a base `prepare_for_tokenization` method which children can override to make changes to the text before tokenization.\r\n\r\nAs for your second point, the changes are made where sequences are encoded in different ways and then compared. The clearest example is probably [here](https://github.com/huggingface/transformers/pull/2778/files#diff-1ca2285a5350e3d634978637356a9bdbR266-R267). The first encode is done with `add_special_tokens=False` whereas the second is done with `add_special_tokens=True`. Since adding special tokens now also adds a prefix space by default in RoBERTa, it's necessary to add `add_prefix_space=False` in the second encode so that the results are consistent.",
"Cool! I believe the `run_tests_tf` is failing due to a tokenization error (linked with your PR).",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2778?src=pr&el=h1) Report\n> Merging [#2778](https://codecov.io/gh/huggingface/transformers/pull/2778?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/73368963b200f2d70d2267bd49a3fa794850b3ff?src=pr&el=desc) will **decrease** coverage by `1.05%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2778?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2778 +/- ##\n==========================================\n- Coverage 75.09% 74.03% -1.06% \n==========================================\n Files 93 93 \n Lines 15250 15263 +13 \n==========================================\n- Hits 11452 11300 -152 \n- Misses 3798 3963 +165\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2778?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2778/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `100% <100%> (ΓΈ)` | :arrow_up: |\n| [src/transformers/tokenization\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/2778/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZ3B0Mi5weQ==) | `96.26% <100%> (+0.05%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2778/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `86.1% <100%> (+0.41%)` | :arrow_up: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/2778/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `71.42% <100%> (-0.52%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2778/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2778/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.91% <0%> (-9.86%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2778/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `70.94% <0%> (-2.28%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2778/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `92.07% <0%> (-2.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2778/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.06% <0%> (-1.33%)` | :arrow_down: |\n| ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/2778/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2778?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2778?src=pr&el=footer). Last update [7336896...a7bacfa](https://codecov.io/gh/huggingface/transformers/pull/2778?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Looks like it was an incorrect pipelines test which was treating each test sentence as one concat'd sequence, [here](https://github.com/huggingface/transformers/pull/2778/files#diff-ca5a8abd41d5c7bd3e6da1636c531976R97).",
"Yap at the moment on the Rust side, there is also an issue regarding the tokenization of Roberta, but which is slightly different from the one here.\r\n\r\nWhen this PR land to master i'll rebase the tokenizers-v2 branch to run all the new tests that the branch brings and see if there is nothing breaking :).\r\n\r\nIt looks great to me π "
] | 1,581 | 1,582 | 1,581 | CONTRIBUTOR | null | **The issue**: The GPT-2 and RoBERTa tokenizers are incorrectly stripping whitespace following special characters, preventing the BPE encoder from correctly encoding spaces in tokens following RoBERTa `<mask>` and `<unk>` tokens.
```
tokenizer.convert_ids_to_tokens(tokenizer.encode('She likes <mask> cats.'))
# output: ['<s>', 'She', 'Ġlikes', '<mask>', 'cats', '.', '</s>']
# should be: ['<s>', 'ĠShe', 'Ġlikes', '<mask>', 'Ġcats', '.', '</s>']
```
This makes the model inputs (and therefore outputs) incorrect. This issue manifests itself in the `fill-mask` pipeline where the model erroneously thinks the mask is a prefix to the following word when using RoBERTa:
```
roberta_fillmask = pipeline("fill-mask")
sentence = "She likes <mask> cats."
roberta_fillmask(sentence)
# top predictions: "She likes bobcats.", "She likes pussycats."
```
This PR makes the following changes:
- Preserves trailing whitespace following special tokens
- Inserts a space after the prepended start token when `add_special_tokens` is `True` in `encode()` so that the user doesn't have to include a leading space in the string. This can be overridden with the `add_prefix_space` argument (see the sketch after this list).
- Adds a `framework` argument to the `pipeline` factory function, allowing users to easily specify TF vs PyTorch
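A minimal sketch of the new `add_prefix_space` behavior (tokenizer and sentence as in the example above; the defaults shown are what this PR intends):

```python
from transformers import RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")

# Default: adding special tokens now also adds a prefix space,
# so "She" is encoded as a word-initial token ('ĠShe').
with_space = tokenizer.encode("She likes <mask> cats.", add_special_tokens=True)

# Opt out to reproduce the previous behavior:
without_space = tokenizer.encode(
    "She likes <mask> cats.", add_special_tokens=True, add_prefix_space=False
)
```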
After making these changes, the top predictions from the above example become 'She likes cute cats.' and 'She likes her cats.' | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2778/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2778/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2778",
"html_url": "https://github.com/huggingface/transformers/pull/2778",
"diff_url": "https://github.com/huggingface/transformers/pull/2778.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2778.patch",
"merged_at": 1581618584000
} |
https://api.github.com/repos/huggingface/transformers/issues/2777 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2777/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2777/comments | https://api.github.com/repos/huggingface/transformers/issues/2777/events | https://github.com/huggingface/transformers/pull/2777 | 561,824,178 | MDExOlB1bGxSZXF1ZXN0MzcyNTYyMTUy | 2,777 | distilbert-base-cased | {
"login": "VictorSanh",
"id": 16107619,
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VictorSanh",
"html_url": "https://github.com/VictorSanh",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"do we want to actually change the model used in the pipeline?",
"> do we want to actually change the model used in the pipeline?\r\n\r\nI'm not sure to understand the rationale behind the question.\r\nPurely from a perf point of view, it's the same inf speed, while having better metrics than before.",
"Nevermind the failing test, it's a Heisenbug. Merge when ready."
] | 1,581 | 1,581 | 1,581 | MEMBER | null | - Weights
- Readmes and docs
- Previous omissions
Weights are uploaded on S3, along with the model cards.
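(For reference, a minimal sketch of loading the new checkpoint by name once it is live; class names assume the usual DistilBERT API:)

```python
from transformers import DistilBertModel, DistilBertTokenizer

tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-cased")
model = DistilBertModel.from_pretrained("distilbert-base-cased")
```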
@LysandreJik Could you make sure I didn't forget anything?
@mfuntowicz Could you have a check on the pipeline part? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2777/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2777/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2777",
"html_url": "https://github.com/huggingface/transformers/pull/2777",
"diff_url": "https://github.com/huggingface/transformers/pull/2777.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2777.patch",
"merged_at": 1581107294000
} |
https://api.github.com/repos/huggingface/transformers/issues/2776 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2776/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2776/comments | https://api.github.com/repos/huggingface/transformers/issues/2776/events | https://github.com/huggingface/transformers/issues/2776 | 561,766,372 | MDU6SXNzdWU1NjE3NjYzNzI= | 2,776 | Pipeline for text classification | {
"login": "AlecS12",
"id": 1517014,
"node_id": "MDQ6VXNlcjE1MTcwMTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1517014?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AlecS12",
"html_url": "https://github.com/AlecS12",
"followers_url": "https://api.github.com/users/AlecS12/followers",
"following_url": "https://api.github.com/users/AlecS12/following{/other_user}",
"gists_url": "https://api.github.com/users/AlecS12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AlecS12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AlecS12/subscriptions",
"organizations_url": "https://api.github.com/users/AlecS12/orgs",
"repos_url": "https://api.github.com/users/AlecS12/repos",
"events_url": "https://api.github.com/users/AlecS12/events{/privacy}",
"received_events_url": "https://api.github.com/users/AlecS12/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1771187924,
"node_id": "MDU6TGFiZWwxNzcxMTg3OTI0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Pipeline",
"name": "Core: Pipeline",
"color": "FF7066",
"default": false,
"description": "Internals of the library; Pipeline."
},
{
"id": 1834081910,
"node_id": "MDU6TGFiZWwxODM0MDgxOTEw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Usage",
"name": "Usage",
"color": "e28436",
"default": false,
"description": "General questions about the library"
}
] | closed | false | null | [] | [
"Did you check the README?\r\n\r\ngrep `text-classification: Initialize a TextClassificationPipeline directly, or see sentiment-analysis for an example.\r\n`",
"Sorry, missed this somehow. Thanks for adding it!\n\nOn Fri, Feb 7, 2020 at 12:38 PM Julien Chaumond <[email protected]>\nwrote:\n\n> Did you check the README?\n>\n> grep text-classification: Initialize a TextClassificationPipeline\n> directly, or see sentiment-analysis for an example.\n>\n> β\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/2776?email_source=notifications&email_token=AALSLVVTFNJWUYUPOVQ7PVLRBWMBJA5CNFSM4KRR6SN2YY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOELD46DA#issuecomment-583520012>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AALSLVWTPQIT7XNKXLWZSYLRBWMBJANCNFSM4KRR6SNQ>\n> .\n>\n",
"@julien-c trying that throws: \r\n\r\n```\r\n---------------------------------------------------------------------------\r\nKeyError Traceback (most recent call last)\r\n<ipython-input-7-b612b71d2864> in <module>\r\n 1 sentiment_analysis = pipeline('sentiment-analysis')\r\n----> 2 text_classification = pipeline('text-classification')\r\n\r\n~/SpacedOut/engage-sentiment/.venv/lib/python3.7/site-packages/transformers/pipelines.py in pipeline(task, model, config, tokenizer, modelcard, framework, **kwargs)\r\n 1024 # Retrieve the task\r\n 1025 if task not in SUPPORTED_TASKS:\r\n-> 1026 raise KeyError(\"Unknown task {}, available tasks are {}\".format(task, list(SUPPORTED_TASKS.keys())))\r\n 1027 \r\n 1028 framework = framework or get_framework(model)\r\n\r\nKeyError: \"Unknown task text-classification, available tasks are ['feature-extraction', 'sentiment-analysis', 'ner', 'question-answering', 'fill-mask']\"\r\n```",
"you should import and use `TextClassificationPipeline` directly (i.e. there isn't a shortcut to use in `pipeline()`)",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I'm going to ask the stupid question, and say there are no tutorial or code examples for `TextClassificationPipeline`. I mean I can dig up the source code, but documentation without examples is never my thing. Would be helpful if I know the data format for `run_tf_text_classification.py` as well. I guess what I'm asking is to finetune a text classification model, but the example at https://huggingface.co/transformers/custom_datasets.html is way too long. Quoting a meme, \"ain't nobody got time for that\". "
] | 1,581 | 1,615 | 1,588 | NONE | null | # 🚀 Feature request
Could you please add a text classification pipeline?
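For reference (per the maintainers' comments above), a minimal sketch of using the pipeline class directly in the meantime; the model name is an illustrative assumption:

```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    TextClassificationPipeline,
)

model_name = "distilbert-base-uncased-finetuned-sst-2-english"  # illustrative checkpoint
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

classifier = TextClassificationPipeline(model=model, tokenizer=tokenizer)
print(classifier("This library is great!"))
```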
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2776/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2776/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2775 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2775/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2775/comments | https://api.github.com/repos/huggingface/transformers/issues/2775/events | https://github.com/huggingface/transformers/issues/2775 | 561,765,704 | MDU6SXNzdWU1NjE3NjU3MDQ= | 2,775 | Using fast tokenizers with pipelines | {
"login": "AlecS12",
"id": 1517014,
"node_id": "MDQ6VXNlcjE1MTcwMTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1517014?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AlecS12",
"html_url": "https://github.com/AlecS12",
"followers_url": "https://api.github.com/users/AlecS12/followers",
"following_url": "https://api.github.com/users/AlecS12/following{/other_user}",
"gists_url": "https://api.github.com/users/AlecS12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AlecS12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AlecS12/subscriptions",
"organizations_url": "https://api.github.com/users/AlecS12/orgs",
"repos_url": "https://api.github.com/users/AlecS12/repos",
"events_url": "https://api.github.com/users/AlecS12/events{/privacy}",
"received_events_url": "https://api.github.com/users/AlecS12/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1771187924,
"node_id": "MDU6TGFiZWwxNzcxMTg3OTI0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Pipeline",
"name": "Core: Pipeline",
"color": "FF7066",
"default": false,
"description": "Internals of the library; Pipeline."
},
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @AlecS12, \r\n\r\nI'm currently working on integrating tokenizers library inside transformers with pipelines support.\r\n\r\nIt should not be long now before it lang on master / get released.\r\n\r\nYou can track the development here: https://github.com/huggingface/transformers/pull/2674\r\nIf you want to checkout out the branch **tokenizers-v2** and give it a try, I'm more than happy to get your feedback.\r\n\r\nMorgan",
"Hi @mfuntowicz,\r\n\r\nThat's great news. I checked out tokenizers-v2 and tried it in a web server (flask) and jupyterlab. In both cases got the same error. Could you please look into this?\r\n\r\n```python\r\nfrom transformers import pipeline\r\nnlp = pipeline('question-answering', model='bert-large-uncased-whole-word-masking-finetuned-squad')\r\nnlp({\r\n 'question': 'Where is the cookie?',\r\n 'context': 'I keep cookies in a red plastic container.'\r\n})\r\n\r\n...\r\nI0212 15:15:53.546577 140155558979392 modeling_utils.py:456] loading weights file https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-whole-word-masking-finetuned-squad-pytorch_model.bin from cache at /home/a652726/.cache/torch/transformers/ca2ac20761877486c1e2204d99653106b9adacf9a5eb18ec71b41d2dbef42103.2db7ae79c41a184c87600faabafa1369db2b16457723fd154ca3b436c4172807\r\nconvert squad examples to features: 0%| | 0/1 [00:00<?, ?it/s]\r\n---------------------------------------------------------------------------\r\nRemoteTraceback Traceback (most recent call last)\r\nRemoteTraceback: \r\n\"\"\"\r\nTraceback (most recent call last):\r\n File \"/home/a652726/miniconda3/envs/nlp2/lib/python3.7/multiprocessing/pool.py\", line 121, in worker\r\n result = (True, func(*args, **kwds))\r\n File \"/home/a652726/miniconda3/envs/nlp2/lib/python3.7/multiprocessing/pool.py\", line 44, in mapstar\r\n return list(map(*args))\r\n File \"/data/home/a652726/transformers/src/transformers/data/processors/squad.py\", line 141, in squad_convert_example_to_features\r\n truncation_strategy=\"only_second\" if tokenizer.padding_side == \"right\" else \"only_first\",\r\n File \"/data/home/a652726/transformers/src/transformers/tokenization_utils.py\", line 1741, in encode_plus\r\n **kwargs,\r\n File \"/data/home/a652726/transformers/src/transformers/tokenization_utils.py\", line 1676, in batch_encode_plus\r\n tokens = self._tokenizer.encode(*batch_text_or_text_pairs[0])\r\n File \"/home/a652726/miniconda3/envs/nlp2/lib/python3.7/site-packages/tokenizers/implementations/base_tokenizer.py\", line 131, in encode\r\n return self._tokenizer.encode(sequence, pair)\r\nTypeError\r\n\"\"\"\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-1-c0bdf7f90854> in <module>\r\n 3 nlp({\r\n 4 'question': 'Where is the cookie?',\r\n----> 5 'context': 'I keep cookies in the red plastic container.'\r\n 6 })\r\n\r\n...\r\n```\r\n",
"I will definitively have a look, and will keep you posted.\r\n\r\nThanks for reporting",
"Hi @mfuntowicz,\r\n\r\nI installed the latest 2.5.1 release and the pipeline error is still there. Had to roll back to 2.4.1.",
"release 2.5.1 does not produce the error by default anymore, because it changed the default Autotokenizer to slow, but the bug is still there:\r\n\r\n```python\r\nimport transformers\r\nfrom transformers import pipeline\r\ntokenizer = transformers.AutoTokenizer.from_pretrained(\"bert-base-uncased\", use_fast=True)\r\nnlp = pipeline('question-answering', model='bert-large-uncased-whole-word-masking-finetuned-squad', tokenizer=tokenizer)\r\nnlp({\r\n 'question': 'Where is the cookie?',\r\n 'context': 'I keep cookies in the red plastic container.'\r\n})\r\n\r\nnlp({\r\n 'question': 'Where is the cookie?',\r\n 'context': 'I keep cookies in the red plastic container.'\r\n})\r\n\r\n\r\nconvert squad examples to features: 0%| | 0/1 [00:00<?, ?it/s]W0226 10:50:27.573424 140277524375360 tokenization_utils.py:1782] Fast tokenizers add special tokens by default. To remove special tokens, please specify`add_special_tokens=False` during the initialisation rather than when calling `encode`,`encode_plus` or `batch_encode_plus`.\r\nW0226 10:50:27.576760 140277524375360 tokenization_utils.py:1782] Fast tokenizers add special tokens by default. To remove special tokens, please specify`add_special_tokens=False` during the initialisation rather than when calling `encode`,`encode_plus` or `batch_encode_plus`.\r\n---------------------------------------------------------------------------\r\nRemoteTraceback Traceback (most recent call last)\r\nRemoteTraceback: \r\n\"\"\"\r\nTraceback (most recent call last):\r\n File \"/home/a652726/miniconda3/envs/nlp2/lib/python3.7/multiprocessing/pool.py\", line 121, in worker\r\n result = (True, func(*args, **kwds))\r\n File \"/home/a652726/miniconda3/envs/nlp2/lib/python3.7/multiprocessing/pool.py\", line 44, in mapstar\r\n return list(map(*args))\r\n File \"/data/home/a652726/transformers/src/transformers/data/processors/squad.py\", line 141, in squad_convert_example_to_features\r\n truncation_strategy=\"only_second\" if tokenizer.padding_side == \"right\" else \"only_first\",\r\n File \"/data/home/a652726/transformers/src/transformers/tokenization_utils.py\", line 1889, in encode_plus\r\n **kwargs,\r\n File \"/data/home/a652726/transformers/src/transformers/tokenization_utils.py\", line 1815, in batch_encode_plus\r\n tokens = self._tokenizer.encode(*batch_text_or_text_pairs[0])\r\n File \"/home/a652726/miniconda3/envs/nlp2/lib/python3.7/site-packages/tokenizers/implementations/base_tokenizer.py\", line 141, in encode\r\n return self._tokenizer.encode(sequence, pair)\r\nTypeError\r\n\"\"\"\r\n\r\n'''\r\n",
"Hi @AlecS12, \r\n\r\nThanks for trying out 2.5.1. The issue is still there because for the question-answering pipeline we're relying on a method from the squad data processor `squad_convert_example_to_feature` which is not compatible which the fast tokenizers.\r\n\r\nI'll have soon have a look at this to make it compatible with the fast tokenizers. \r\n\r\nSorry for the inconvenience. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hi @mfuntowicz,,\r\n\r\nThe problem is still there in 2.10.1. Could you please reopen the issue and fix it?\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,581 | 1,596 | 1,596 | NONE | null | # 🚀 Feature request
Currently, the fast tokenizers do not work with the QA pipeline because they do not have the `tokenize` method implemented. Speeding up tokenization would be really beneficial for my application.
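A minimal sketch of the current workaround, per the comments above: fall back to the slow tokenizer until the fast ones are supported by this pipeline.

```python
from transformers import AutoTokenizer, pipeline

# use_fast=False keeps the pure-Python tokenizer, which the QA pipeline supports
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=False)
nlp = pipeline(
    "question-answering",
    model="bert-large-uncased-whole-word-masking-finetuned-squad",
    tokenizer=tokenizer,
)
print(nlp(question="Where is the cookie?", context="I keep cookies in a red plastic container."))
```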
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2775/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2775/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2774 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2774/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2774/comments | https://api.github.com/repos/huggingface/transformers/issues/2774/events | https://github.com/huggingface/transformers/issues/2774 | 561,755,729 | MDU6SXNzdWU1NjE3NTU3Mjk= | 2,774 | embedding index getting out of range while running gpt2-xl model | {
"login": "mainulquraishi",
"id": 14335238,
"node_id": "MDQ6VXNlcjE0MzM1MjM4",
"avatar_url": "https://avatars.githubusercontent.com/u/14335238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mainulquraishi",
"html_url": "https://github.com/mainulquraishi",
"followers_url": "https://api.github.com/users/mainulquraishi/followers",
"following_url": "https://api.github.com/users/mainulquraishi/following{/other_user}",
"gists_url": "https://api.github.com/users/mainulquraishi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mainulquraishi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mainulquraishi/subscriptions",
"organizations_url": "https://api.github.com/users/mainulquraishi/orgs",
"repos_url": "https://api.github.com/users/mainulquraishi/repos",
"events_url": "https://api.github.com/users/mainulquraishi/events{/privacy}",
"received_events_url": "https://api.github.com/users/mainulquraishi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834053813,
"node_id": "MDU6TGFiZWwxODM0MDUzODEz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/PyTorch",
"name": "PyTorch",
"color": "a12bef",
"default": false,
"description": "Anything PyTorch"
},
{
"id": 1834056761,
"node_id": "MDU6TGFiZWwxODM0MDU2NzYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling",
"name": "Core: Modeling",
"color": "FF8446",
"default": false,
"description": "Internals of the library; Models."
},
{
"id": 1834059054,
"node_id": "MDU6TGFiZWwxODM0MDU5MDU0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20Generation",
"name": "Ex: Generation",
"color": "06EFF8",
"default": false,
"description": "Natural Language Generation"
},
{
"id": 1834081910,
"node_id": "MDU6TGFiZWwxODM0MDgxOTEw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Usage",
"name": "Usage",
"color": "e28436",
"default": false,
"description": "General questions about the library"
}
] | closed | false | null | [] | [
"Indeed, there was an error in the code, thank you for letting us know! I've patched it with fd639e5be31f83447c37cf79023fd98bac29f86c.\r\n\r\nIt is now [updated in the docs](https://huggingface.co/transformers/quickstart.html#using-the-past). Thanks!"
] | 1,581 | 1,581 | 1,581 | NONE | null | I am trying to run the Hugging Face gpt2-xl model. I ran code from the [quickstart](https://huggingface.co/transformers/quickstart.html) page that loads the small gpt2 model and generates text with the following code:
```
from transformers import GPT2LMHeadModel, GPT2Tokenizer
import torch
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained('gpt2')
generated = tokenizer.encode("The Manhattan bridge")
context = torch.tensor([generated])
past = None
for i in range(100):
    print(i)
    output, past = model(context, past=past)
    token = torch.argmax(output[0, :])
    generated += [token.tolist()]
    context = token.unsqueeze(0)

sequence = tokenizer.decode(generated)
print(sequence)
```
This is running perfectly. Then I try to run the `gpt2-xl` model.
I changed the `tokenizer` and `model` loading code as follows:
```
tokenizer = GPT2Tokenizer.from_pretrained("gpt2-xl")
model = GPT2LMHeadModel.from_pretrained('gpt2-xl')
```
The `tokenizer` and `model` loaded perfectly, but I am getting an error on the following line:
`output, past = model(context, past=past)`
The error is:
```
RuntimeError: index out of range: Tried to access index 204483 out of table with 50256 rows. at /pytorch/aten/src/TH/generic/THTensorEvenMoreMath.cpp:418
```
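This matches the bug acknowledged in the comment above: `torch.argmax(output[0, :])` takes the argmax over the flattened sequence × vocabulary logits, so the resulting index can exceed the vocabulary size (hence 204483 > 50257). A minimal sketch of the corrected loop, following the docs fix referenced above (commit fd639e5):

```python
for i in range(100):
    output, past = model(context, past=past)
    # argmax over the last position's vocabulary logits only
    token = torch.argmax(output[..., -1, :])
    generated += [token.tolist()]
    context = token.unsqueeze(0)
```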
Looking at the error, it seems that the embedding size is not correct. So I wrote the following line to specifically fetch the config file of `gpt2-xl`:
`config = GPT2Config.from_pretrained("gpt2-xl")`
But here, `vocab_size` is 50257.
So I explicitly changed the value with:
`config.vocab_size = 204483`
Then after printing the `config`, I can see that the previous line took effect in the configuration. But still, I am getting the same error. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2774/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2774/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2773 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2773/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2773/comments | https://api.github.com/repos/huggingface/transformers/issues/2773/events | https://github.com/huggingface/transformers/issues/2773 | 561,565,418 | MDU6SXNzdWU1NjE1NjU0MTg= | 2,773 | How to load a pretrained TF model using AutoModel? | {
"login": "erikchwang",
"id": 16256959,
"node_id": "MDQ6VXNlcjE2MjU2OTU5",
"avatar_url": "https://avatars.githubusercontent.com/u/16256959?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/erikchwang",
"html_url": "https://github.com/erikchwang",
"followers_url": "https://api.github.com/users/erikchwang/followers",
"following_url": "https://api.github.com/users/erikchwang/following{/other_user}",
"gists_url": "https://api.github.com/users/erikchwang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/erikchwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/erikchwang/subscriptions",
"organizations_url": "https://api.github.com/users/erikchwang/orgs",
"repos_url": "https://api.github.com/users/erikchwang/repos",
"events_url": "https://api.github.com/users/erikchwang/events{/privacy}",
"received_events_url": "https://api.github.com/users/erikchwang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @erikchwang, you should use `TFAutoModel` instead",
"Is this TFAutoModel mentioned in the document? I cannot find it...",
"I'll add it to the model pages soon. Thanks!",
"when I load TF model by use AutoModel from your document, there are many errors, like this \r\n`model = AutoModel.from_pretrained(r'/Users/maxiong/Workpace/Code/transformers/pre_model/bert_model.ckpt.index', from_tf=True, config=config)\r\n`\r\n\r\n\r\nwhen I used TFAutoModel to load a model, there is like this\r\n`model = TFAutoModel.from_pretrained(r'/Users/maxiong/Workpace/Code/transformers/pre_model/bert_model.ckpt.index', config=config)\r\n`\r\n\r\n\r\nI tried many functions to load TF Pretraining model in your document, most of them appeared errors\r\n",
"I can't able to load model for model = TFAutoModel.from_pretrained(\"emilyalsentzer/Bio_ClinicalBERT\")\r\nand\r\nTFAutoModel\r\n .from_pretrained('microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext')\r\nanyone ?",
"> I can't able to load model for model = TFAutoModel.from_pretrained(\"emilyalsentzer/Bio_ClinicalBERT\")\r\n> and\r\n> TFAutoModel\r\n> .from_pretrained('microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext')\r\n> anyone ?\r\n\r\nHI @blmali, I had the same issue when trying to load \"emilyalsentzer/Bio_Discharge_Summary_BERT\". I solved it by passing `from_pt` argument as `True`:\r\n`model = TFAutoModel.from_pretrained(\"emilyalsentzer/Bio_Discharge_Summary_BERT\", from_pt=True)`.\r\n\r\nI hope this helps."
] | 1,581 | 1,603 | 1,581 | NONE | null | Run the following code:
```
import tensorflow as tf
from transformers import AutoModel, TFBertModel
auto_model = AutoModel.from_pretrained("bert-base-uncased")
tfbert_model = TFBertModel.from_pretrained("bert-base-uncased")
print(auto_model.__class__)
print(tfbert_model.__class__)
```
Then the output is:
```
<class 'transformers.modeling_bert.BertModel'>
<class 'transformers.modeling_tf_bert.TFBertModel'>
```
It seems that AutoModel loads the pretrained PyTorch models by default, but how can I use it to load a pretrained TF model? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2773/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2773/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2772 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2772/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2772/comments | https://api.github.com/repos/huggingface/transformers/issues/2772/events | https://github.com/huggingface/transformers/issues/2772 | 561,506,277 | MDU6SXNzdWU1NjE1MDYyNzc= | 2,772 | How to generate different suggestions with GPT2 or XLNet like Write With Transformers? | {
"login": "cppntn",
"id": 26765504,
"node_id": "MDQ6VXNlcjI2NzY1NTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26765504?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cppntn",
"html_url": "https://github.com/cppntn",
"followers_url": "https://api.github.com/users/cppntn/followers",
"following_url": "https://api.github.com/users/cppntn/following{/other_user}",
"gists_url": "https://api.github.com/users/cppntn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cppntn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cppntn/subscriptions",
"organizations_url": "https://api.github.com/users/cppntn/orgs",
"repos_url": "https://api.github.com/users/cppntn/repos",
"events_url": "https://api.github.com/users/cppntn/events{/privacy}",
"received_events_url": "https://api.github.com/users/cppntn/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I closed this issue since I found useful to set the `do_sample` argument to True, as mentioned in this issue: https://github.com/huggingface/transformers/issues/2415"
] | 1,581 | 1,581 | 1,581 | NONE | null | Hello,
I want to use run_generation to generate several different suggestions for the next words, preferably with variable length and different terms or synonyms, like it is done in Write With Transformer.
Any suggestion or idea on how to achieve this?
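(For reference: the resolution comment above points at sampling. A minimal sketch, assuming a transformers version whose `generate()` supports these arguments; the model, prompt, and parameters are illustrative.)

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

input_ids = torch.tensor([tokenizer.encode("The weather today is")])
# do_sample=True samples from the distribution instead of greedy argmax,
# so each returned sequence can differ in wording and length.
outputs = model.generate(
    input_ids,
    max_length=30,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    num_return_sequences=3,
)
for out in outputs:
    print(tokenizer.decode(out, skip_special_tokens=True))
```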
Thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2772/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2772/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2771 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2771/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2771/comments | https://api.github.com/repos/huggingface/transformers/issues/2771/events | https://github.com/huggingface/transformers/issues/2771 | 561,455,322 | MDU6SXNzdWU1NjE0NTUzMjI= | 2,771 | export to onnx issue | {
"login": "jian16",
"id": 18178108,
"node_id": "MDQ6VXNlcjE4MTc4MTA4",
"avatar_url": "https://avatars.githubusercontent.com/u/18178108?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jian16",
"html_url": "https://github.com/jian16",
"followers_url": "https://api.github.com/users/jian16/followers",
"following_url": "https://api.github.com/users/jian16/following{/other_user}",
"gists_url": "https://api.github.com/users/jian16/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jian16/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jian16/subscriptions",
"organizations_url": "https://api.github.com/users/jian16/orgs",
"repos_url": "https://api.github.com/users/jian16/repos",
"events_url": "https://api.github.com/users/jian16/events{/privacy}",
"received_events_url": "https://api.github.com/users/jian16/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834083927,
"node_id": "MDU6TGFiZWwxODM0MDgzOTI3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/External",
"name": "External",
"color": "fbca04",
"default": false,
"description": "Using the library with external tools (onnx, tflite, ...)"
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,581 | 1,586 | 1,586 | NONE | null | Hi experts,
I got an error when running the onnx model after conversion. Can anyone please help to take a look?
code:
`torch.onnx.export(model,
(input_ids, attention_mask, token_type_ids),
"bert.onnx",
input_names=['input_ids', 'attention_mask', 'token_type_ids'],
export_params=True, verbose=True)`
`sess = rt.InferenceSession("bert.onnx")
inputs = {'input_ids': input_ids, 'attention_mask': attention_mask, 'token_type_ids': token_type_ids}
outputs = sess.run(None, inputs)
`
error:
Traceback (most recent call last):
File "test.py", line 29, in <module>
outputs = sess.run(None, inputs)
File "/usr/local/lib/python3.6/dist-packages/onnxruntime/capi/session.py", line 142, in run
return self._sess.run(output_names, input_feed, run_options)
onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Non-zero status code returned while running Gather node. Name:'' Status Message: indices element out of data bounds, idx=1 must be within the inclusive range [-1,0]
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2771/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2771/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2770 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2770/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2770/comments | https://api.github.com/repos/huggingface/transformers/issues/2770/events | https://github.com/huggingface/transformers/issues/2770 | 561,425,701 | MDU6SXNzdWU1NjE0MjU3MDE= | 2,770 | The prediction output is random | {
"login": "Mozen",
"id": 32283954,
"node_id": "MDQ6VXNlcjMyMjgzOTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/32283954?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mozen",
"html_url": "https://github.com/Mozen",
"followers_url": "https://api.github.com/users/Mozen/followers",
"following_url": "https://api.github.com/users/Mozen/following{/other_user}",
"gists_url": "https://api.github.com/users/Mozen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mozen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mozen/subscriptions",
"organizations_url": "https://api.github.com/users/Mozen/orgs",
"repos_url": "https://api.github.com/users/Mozen/repos",
"events_url": "https://api.github.com/users/Mozen/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mozen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @Mozen, \r\nYou will need to train your model for sequence classification first.\r\nThe pre-trained models are not yet trained for the downstream task. So right now, you have an untrained sequence classification head on top of Bert. \r\n\r\nI could not find where it was mentioned in the docs, but have a look at [this comment](https://github.com/huggingface/transformers/issues/1979#issuecomment-559597512).",
"@jwallat okοΌ thanks a lotοΌ",
"Please close the question if your question is answered."
] | 1,581 | 1,581 | 1,581 | NONE | null | When I use the official example scripts to predict with a text sentence classification model, I found that the output is different every time.
```
from transformers import BertTokenizer, BertForSequenceClassification
import torch
import numpy as np
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForSequenceClassification.from_pretrained('bert-base-uncased')
model.eval()
input_ids = torch.tensor(tokenizer.encode("my dog is cute", add_special_tokens=True)).unsqueeze(0) # Batch size 1
labels = torch.tensor([1]).unsqueeze(0) # Batch size 1
outputs = model(input_ids)
print(outputs)
```
first result: (tensor([[-0.1939, 0.1449]], grad_fn=<AddmmBackward>),)
second result: (tensor([[-0.2425, -0.2737]], grad_fn=<AddmmBackward>),)
third result: (tensor([[ 0.0494, -0.7208]], grad_fn=<AddmmBackward>),)
......
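(As the comments above explain, the sequence classification head on top of the pre-trained encoder is randomly initialized on every `from_pretrained` call, so its outputs vary until the model is fine-tuned. A minimal sketch to make the runs reproducible while experimenting; the seed value is arbitrary:)

```python
import torch
from transformers import BertForSequenceClassification

torch.manual_seed(42)  # fixes the random init of the untrained classification head
model = BertForSequenceClassification.from_pretrained('bert-base-uncased')
model.eval()
```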
I expected the outputs to be the same... am I doing this wrong? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2770/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2770/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2769 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2769/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2769/comments | https://api.github.com/repos/huggingface/transformers/issues/2769/events | https://github.com/huggingface/transformers/issues/2769 | 561,390,917 | MDU6SXNzdWU1NjEzOTA5MTc= | 2,769 | Model download: tf-xlm-roberta-large "tf_model.h5" file missing | {
"login": "paradc2",
"id": 5579901,
"node_id": "MDQ6VXNlcjU1Nzk5MDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/5579901?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/paradc2",
"html_url": "https://github.com/paradc2",
"followers_url": "https://api.github.com/users/paradc2/followers",
"following_url": "https://api.github.com/users/paradc2/following{/other_user}",
"gists_url": "https://api.github.com/users/paradc2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/paradc2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/paradc2/subscriptions",
"organizations_url": "https://api.github.com/users/paradc2/orgs",
"repos_url": "https://api.github.com/users/paradc2/repos",
"events_url": "https://api.github.com/users/paradc2/events{/privacy}",
"received_events_url": "https://api.github.com/users/paradc2/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"same for roberta-base and roberta-large",
"cc @jplu :)",
"@jplu Renamed the file using `aws s3 mv s3://models.huggingface.co/bert/jplu/tf-xlm-roberta-large/xlm-roberta-large-tf_model.h5 s3://models.huggingface.co/bert/jplu/tf-xlm-roberta-large/tf_model.h5`\r\n\r\nDoes it work now @paradc2 ?\r\n\r\n@RichJackson Which exact models are you talking about?",
"humm this is weird I was sure to have properly named the models... This is certainly my bad then. I'm really sorry guys!\r\n\r\nHere what I have when I `ls` my repo:\r\n```\r\n(transformers) ββ[jplu@robinson] - [~/transformers] - [ven. fΓ©vr. 07, 16:05]\r\nββ[$] <git:(fix-tf-distil*)> ./transformers-cli s3 ls \r\nFilename LastModified ETag Size \r\n------------------------------------ ------------------------ -------------------------------------- ---------- \r\ntf-camembert-base/config.json 2020-01-31T23:00:26.000Z \"da462af1da162d7145bf47f066533574\" 596 \r\ntf-camembert-base/tf_model.h5 2020-01-30T12:25:25.000Z \"fbce3cf6602dbb56daf6ea2b9642eefc\" 545172724 \r\ntf-flaubert-base-cased/config.json 2020-01-31T23:00:26.000Z \"b1bb00ff27331cee714b82d659b18d0e\" 942 \r\ntf-flaubert-base-cased/tf_model.h5 2020-01-31T16:53:31.000Z \"1418889252dda2462c2e8b8b0b74010d\" 764558620 \r\ntf-flaubert-base-uncased/config.json 2020-01-31T23:00:26.000Z \"b88f774bef4f4ab20748b728441fd03e\" 942 \r\ntf-flaubert-base-uncased/tf_model.h5 2020-01-31T16:54:12.000Z \"db954070da0d1435e07ae67713de63c3\" 757260944 \r\ntf-flaubert-large-cased/config.json 2020-01-31T23:00:26.000Z \"e0a5f3081bbb858a0096daa18a55157d\" 1030 \r\ntf-flaubert-large-cased/tf_model.h5 2020-01-31T16:55:28.000Z \"10b53d7cec21cc2d5a28a8d6a225e0ad\" 1775057844 \r\ntf-flaubert-small-cased/config.json 2020-01-31T23:00:27.000Z \"b4fe61d6ed58fbbc00d3f5aca3a23829\" 1007 \r\ntf-flaubert-small-cased/tf_model.h5 2020-01-31T16:54:58.000Z \"a8c6e15d7434dca7d49f1666b4933f2a\" 358615548 \r\ntf-xlm-roberta-base/config.json 2020-01-31T23:00:27.000Z \"3bb4d32c4818bf4ce53021f6ce7839df\" 737 \r\ntf-xlm-roberta-base/tf_model.h5 2020-01-30T10:30:20.000Z \"248f95f776e119c46132860f11085c2d\" 1885418496 \r\ntf-xlm-roberta-large/config.json 2020-01-31T23:00:27.000Z \"d6f295d68b0414208f5fc1cbc2f0dce6\" 738 \r\ntf-xlm-roberta-large/tf_model.h5 2020-02-07T14:51:57.000Z \"44602b7afc746bc6971e793f4534dcf0-390\" 3271420488\r\n```",
"here's the exception:\r\n\r\n\r\n```\r\n02/07/2020 15:03:17 - INFO - transformers.tokenization_utils - loading file https://s3.amazonaws.com/models.huggingface.co/bert/roberta-base-vocab.json from cache at /home/kxfv271/.cache/torch/transformers/d0c5776499adc1ded22493fae699da0971c1ee4c2587111707a4d177d20257a2.ef00af9e673c7160b4d41cfda1f48c5f4cba57d5142754525572a846a1ab1b9b\r\n02/07/2020 15:03:17 - INFO - transformers.tokenization_utils - loading file https://s3.amazonaws.com/models.huggingface.co/bert/roberta-base-merges.txt from cache at /home/kxfv271/.cache/torch/transformers/b35e7cd126cd4229a746b5d5c29a749e8e84438b14bcdb575950584fe33207e8.70bec105b4158ed9a1747fea67a43f5dee97855c64d62b6ec3742f4cfdb5feda\r\n02/07/2020 15:03:18 - INFO - transformers.modeling_utils - loading weights file https://s3.amazonaws.com/models.huggingface.co/bert/roberta-base/pytorch_model.bin from cache at None\r\nTraceback (most recent call last):\r\n File \"<masked>lib/python3.7/site-packages/torch/serialization.py\", line 289, in _check_seekable\r\n f.seek(f.tell())\r\nAttributeError: 'NoneType' object has no attribute 'seek'\r\nDuring handling of the above exception, another exception occurred:\r\nTraceback (most recent call last):\r\n File \"<masked>/lib/python3.7/site-packages/transformers/modeling_utils.py\", line 467, in from_pretrained\r\n state_dict = torch.load(resolved_archive_file, map_location=\"cpu\")\r\n File \"<masked/>lib/python3.7/site-packages/torch/serialization.py\", line 525, in load\r\n with _open_file_like(f, 'rb') as opened_file:\r\n File \"<masked>/lib/python3.7/site-packages/torch/serialization.py\", line 217, in _open_file_like\r\n return _open_buffer_reader(name_or_buffer)\r\n File \"<masked>/lib/python3.7/site-packages/torch/serialization.py\", line 202, in __init__\r\n _check_seekable(buffer)\r\n File \"<masked>/lib/python3.7/site-packages/torch/serialization.py\", line 292, in _check_seekable\r\n raise_err_msg([\"seek\", \"tell\"], e)\r\n File \"<masked>/lib/python3.7/site-packages/torch/serialization.py\", line 285, in raise_err_msg\r\n raise type(e)(msg)\r\nAttributeError: 'NoneType' object has no attribute 'seek'. You can only torch.load from a file that is seekable. Please pre-load the data into a buffer like io.BytesIO and try to load from it instead.\r\nDuring handling of the above exception, another exception occurred:\r\nTraceback (most recent call last):\r\n File \"<input>\", line 1, in <module>\r\n File \"/data/home/kxfv271/.pycharm_helpers/pydev/_pydev_bundle/pydev_umd.py\", line 197, in runfile\r\n pydev_imports.execfile(filename, global_vars, local_vars) # execute the script\r\n File \"/data/home/kxfv271/.pycharm_helpers/pydev/_pydev_imps/_pydev_execfile.py\", line 18, in execfile\r\n exec(compile(contents+\"\\n\", file, 'exec'), glob, loc)\r\n File \"/datadrive/pycharm_project_817/aznlp_tools/rbert_paper/rbert_ablations.py\", line 1182, in <module>\r\n main()\r\n File \"/datadrive/pycharm_project_817/aznlp_tools/rbert_paper/rbert_ablations.py\", line 1108, in main\r\n cache_dir=args.cache_dir if args.cache_dir else None\r\n File \"<masked>/lib/python3.7/site-packages/transformers/modeling_utils.py\", line 470, in from_pretrained\r\n \"Unable to load weights from pytorch checkpoint file. \"\r\nOSError: Unable to load weights from pytorch checkpoint file. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True. \r\n```\r\n\r\nlooks like the vocab and config files are still available?",
"@RichJackson, this is a different error to @paradc2. Could you show us which command raised this error?",
"I'm running a (modified) version of the run_glue.py example. I think the problem is on [this line](https://github.com/huggingface/transformers/blob/73368963b200f2d70d2267bd49a3fa794850b3ff/examples/run_glue.py#L634). If you don't provide a --cache-dir argument, this evaluates to None? Hence the above log line:\r\n\r\n```\r\n02/07/2020 15:03:18 - INFO - transformers.modeling_utils - loading weights file https://s3.amazonaws.com/models.huggingface.co/bert/roberta-base/pytorch_model.bin from cache at None\r\n```\r\n\r\ni.e. model links seem to be ok",
"do you mind opening a new issue for this?",
"@julien-c yes, the tf-xlm-roberta-large download works as expected now. Thanks!",
"> \r\n> \r\n> I'm running a (modified) version of the run_glue.py example. I think the problem is on [this line](https://github.com/huggingface/transformers/blob/73368963b200f2d70d2267bd49a3fa794850b3ff/examples/run_glue.py#L634). If you don't provide a --cache-dir argument, this evaluates to None? Hence the above log line:\r\n> \r\n> ```\r\n> 02/07/2020 15:03:18 - INFO - transformers.modeling_utils - loading weights file https://s3.amazonaws.com/models.huggingface.co/bert/roberta-base/pytorch_model.bin from cache at None\r\n> ```\r\n> \r\n> i.e. model links seem to be ok\r\n\r\nHi Sir, i have the same problem with Camembert while fine-tuning on FQUAD. any solutions Sir ?"
] | 1,581 | 1,588 | 1,581 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): tf-xlm-roberta-large
The "tf_model.h5" file for tf-xlm-roberta-large appears to be missing as the following url from model hub is returning "NoSuchKey" errors: https://s3.amazonaws.com/models.huggingface.co/bert/jplu/tf-xlm-roberta-large/tf_model.h5
If this is intentional, will it be re-uploaded soon? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2769/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2769/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2768 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2768/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2768/comments | https://api.github.com/repos/huggingface/transformers/issues/2768/events | https://github.com/huggingface/transformers/issues/2768 | 561,387,126 | MDU6SXNzdWU1NjEzODcxMjY= | 2,768 | why take the first hidden state for sequence classification (DistilBertForSequenceClassification) | {
"login": "junhuang-ifast",
"id": 47650501,
"node_id": "MDQ6VXNlcjQ3NjUwNTAx",
"avatar_url": "https://avatars.githubusercontent.com/u/47650501?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/junhuang-ifast",
"html_url": "https://github.com/junhuang-ifast",
"followers_url": "https://api.github.com/users/junhuang-ifast/followers",
"following_url": "https://api.github.com/users/junhuang-ifast/following{/other_user}",
"gists_url": "https://api.github.com/users/junhuang-ifast/gists{/gist_id}",
"starred_url": "https://api.github.com/users/junhuang-ifast/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/junhuang-ifast/subscriptions",
"organizations_url": "https://api.github.com/users/junhuang-ifast/orgs",
"repos_url": "https://api.github.com/users/junhuang-ifast/repos",
"events_url": "https://api.github.com/users/junhuang-ifast/events{/privacy}",
"received_events_url": "https://api.github.com/users/junhuang-ifast/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834056761,
"node_id": "MDU6TGFiZWwxODM0MDU2NzYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling",
"name": "Core: Modeling",
"color": "FF8446",
"default": false,
"description": "Internals of the library; Models."
},
{
"id": 1834081910,
"node_id": "MDU6TGFiZWwxODM0MDgxOTEw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Usage",
"name": "Usage",
"color": "e28436",
"default": false,
"description": "General questions about the library"
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,581 | 1,586 | 1,586 | NONE | null | In the last few layers of sequence classification [here][1], the first hidden state along the sequence dimension of the transformer output is taken to be used for classification.
hidden_state = distilbert_output[0] # (bs, seq_len, dim) <-- transformer output
pooled_output = hidden_state[:, 0] # (bs, dim) <-- first hidden state
pooled_output = self.pre_classifier(pooled_output) # (bs, dim)
pooled_output = nn.ReLU()(pooled_output) # (bs, dim)
pooled_output = self.dropout(pooled_output) # (bs, dim)
    logits = self.classifier(pooled_output) # (bs, num_labels)
Is there any benefit to taking the first hidden state over the last one, the average, or even using a Flatten layer instead?
I've also asked this question on [Stack Overflow](https://stackoverflow.com/questions/60087613/why-take-the-first-hidden-state-for-sequence-classification-distilbertforsequen)
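For comparison, a minimal sketch of mean pooling over the non-padded positions, as one possible alternative (`attention_mask` and the other names here are illustrative, not taken from the library code):

    mask = attention_mask.unsqueeze(-1).float() # (bs, seq_len, 1)
    summed = (hidden_state * mask).sum(dim=1) # (bs, dim)
    counts = mask.sum(dim=1).clamp(min=1e-9) # (bs, 1), avoids division by zero
    mean_pooled = summed / counts # (bs, dim) <-- average over real tokens only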
[1]: https://github.com/huggingface/transformers/blob/33d3072e1c54bcd235447b98c6dea1b4cb71234c/src/transformers/modeling_distilbert.py#L634 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2768/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2768/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2767 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2767/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2767/comments | https://api.github.com/repos/huggingface/transformers/issues/2767/events | https://github.com/huggingface/transformers/issues/2767 | 561,366,595 | MDU6SXNzdWU1NjEzNjY1OTU= | 2,767 | Adapter-BERT is missing in transformers library? | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Up, I think this is an awesome idea",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,581 | 1,588 | 1,588 | NONE | null | Adapter-BERT obtains comparable results to BERT on several NLP tasks while achieving parameter efficiency. ( https://github.com/google-research/adapter-bert ) @thomwolf
I think it would be useful if adapter-BERT were also included in the library.
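For context, a minimal sketch of the bottleneck adapter module described in the paper (down-projection, nonlinearity, up-projection, residual connection); the class name and sizes are illustrative, and ReLU stands in here for the paper's nonlinearity:

```python
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, apply a nonlinearity, up-project, add the residual."""

    def __init__(self, hidden_size=768, bottleneck_size=64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck_size)
        self.up = nn.Linear(bottleneck_size, hidden_size)
        self.activation = nn.ReLU()

    def forward(self, hidden_states):
        # only these small layers are trained; the pretrained transformer weights stay frozen
        return hidden_states + self.up(self.activation(self.down(hidden_states)))
```

In the paper's setup, such a module is inserted after the attention and feed-forward sub-layers of each transformer block, and only the adapters (plus layer norms and the task head) are updated during fine-tuning.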
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2767/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2767/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2766 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2766/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2766/comments | https://api.github.com/repos/huggingface/transformers/issues/2766/events | https://github.com/huggingface/transformers/pull/2766 | 561,360,496 | MDExOlB1bGxSZXF1ZXN0MzcyMTg2MzMx | 2,766 | Fix documentation in ProjectedAdaptiveLogSoftmax | {
"login": "ari-holtzman",
"id": 20871523,
"node_id": "MDQ6VXNlcjIwODcxNTIz",
"avatar_url": "https://avatars.githubusercontent.com/u/20871523?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ari-holtzman",
"html_url": "https://github.com/ari-holtzman",
"followers_url": "https://api.github.com/users/ari-holtzman/followers",
"following_url": "https://api.github.com/users/ari-holtzman/following{/other_user}",
"gists_url": "https://api.github.com/users/ari-holtzman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ari-holtzman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ari-holtzman/subscriptions",
"organizations_url": "https://api.github.com/users/ari-holtzman/orgs",
"repos_url": "https://api.github.com/users/ari-holtzman/repos",
"events_url": "https://api.github.com/users/ari-holtzman/events{/privacy}",
"received_events_url": "https://api.github.com/users/ari-holtzman/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2766?src=pr&el=h1) Report\n> Merging [#2766](https://codecov.io/gh/huggingface/transformers/pull/2766?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/33d3072e1c54bcd235447b98c6dea1b4cb71234c?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2766?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2766 +/- ##\n======================================\n Coverage 75.1% 75.1% \n======================================\n Files 93 93 \n Lines 15249 15249 \n======================================\n Hits 11452 11452 \n Misses 3797 3797\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2766?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/modeling\\_transfo\\_xl\\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/2766/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `53.33% <ΓΈ> (ΓΈ)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2766?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2766?src=pr&el=footer). Last update [33d3072...8725c54](https://codecov.io/gh/huggingface/transformers/pull/2766?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Perfect, thank you!!"
] | 1,581 | 1,581 | 1,581 | CONTRIBUTOR | null | The shape of outputs for forward in ProjectedAdaptiveLogSoftmax is flipped in the documentation: it should be log probabilities when `labels` is `None` and NLLs otherwise. This is what the code does, but the docstring has them flipped. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2766/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2766/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2766",
"html_url": "https://github.com/huggingface/transformers/pull/2766",
"diff_url": "https://github.com/huggingface/transformers/pull/2766.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2766.patch",
"merged_at": 1581088499000
} |
https://api.github.com/repos/huggingface/transformers/issues/2765 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2765/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2765/comments | https://api.github.com/repos/huggingface/transformers/issues/2765/events | https://github.com/huggingface/transformers/pull/2765 | 561,324,272 | MDExOlB1bGxSZXF1ZXN0MzcyMTU3MTIw | 2,765 | Add option to `cached_path` to automatically extract archives | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2765?src=pr&el=h1) Report\n> Merging [#2765](https://codecov.io/gh/huggingface/transformers/pull/2765?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2c12464a20160061a8b436b4939e8d5fa2437a15?src=pr&el=desc) will **decrease** coverage by `0.36%`.\n> The diff coverage is `31.03%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2765?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2765 +/- ##\n==========================================\n- Coverage 75.09% 74.73% -0.37% \n==========================================\n Files 93 93 \n Lines 15250 15273 +23 \n==========================================\n- Hits 11452 11414 -38 \n- Misses 3798 3859 +61\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2765?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2765/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.26% <100%> (-0.56%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2765/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `61.26% <100%> (-0.07%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2765/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `67.74% <25.92%> (-5.36%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/2765/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `52.94% <0%> (-21.57%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/2765/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `68.62% <0%> (-3.32%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2765/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `84.87% <0%> (-0.82%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2765?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2765?src=pr&el=footer). Last update [2c12464...c6c5c3f](https://codecov.io/gh/huggingface/transformers/pull/2765?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,581 | 1,581 | 1,581 | MEMBER | null | Slight modification to `cached_path` so that zip and tar archives can be automatically extracted.
- archives are extracted in the same directory as the (possibly downloaded) archive, inside a newly created extraction directory named after the archive.
- automatic extraction is activated by setting `extract_compressed_file=True` when calling `cached_path` (a usage sketch follows this list).
- the extraction directory is re-used to avoid extracting the archive again, unless we set `force_extract=True`, in which case the cached extraction directory is removed and the archive is extracted again.
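A minimal usage sketch (the archive URL is a placeholder):

```python
from transformers.file_utils import cached_path

# download (or reuse the cache) and extract next to the cached archive
extracted_dir = cached_path(
    "https://example.com/my-model.tar.gz",  # placeholder URL
    extract_compressed_file=True,
)

# discard the cached extraction directory and extract again from the archive
extracted_dir = cached_path(
    "https://example.com/my-model.tar.gz",  # placeholder URL
    extract_compressed_file=True,
    force_extract=True,
)
```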
This is currently not added to the `from_pretrained` methods. It is probably better to have the user control this explicitly at that level (by first extracting the archive) => open to discussion though.
Also includes a simple proposal to add TF/PT compatibility in hf_buckets (cc @julien-c) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2765/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2765/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2765",
"html_url": "https://github.com/huggingface/transformers/pull/2765",
"diff_url": "https://github.com/huggingface/transformers/pull/2765.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2765.patch",
"merged_at": 1581339916000
} |
https://api.github.com/repos/huggingface/transformers/issues/2764 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2764/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2764/comments | https://api.github.com/repos/huggingface/transformers/issues/2764/events | https://github.com/huggingface/transformers/pull/2764 | 561,242,382 | MDExOlB1bGxSZXF1ZXN0MzcyMDg5MjM5 | 2,764 | [examples] rename run_lm_finetuning to run_language_modeling | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Great!!"
] | 1,581 | 1,581 | 1,581 | MEMBER | null | And corresponding doc updates | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2764/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2764/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2764",
"html_url": "https://github.com/huggingface/transformers/pull/2764",
"diff_url": "https://github.com/huggingface/transformers/pull/2764.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2764.patch",
"merged_at": 1581084929000
} |
https://api.github.com/repos/huggingface/transformers/issues/2763 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2763/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2763/comments | https://api.github.com/repos/huggingface/transformers/issues/2763/events | https://github.com/huggingface/transformers/issues/2763 | 561,230,619 | MDU6SXNzdWU1NjEyMzA2MTk= | 2,763 | Add albert-base-v3 to pretrained models? | {
"login": "nicolasahar",
"id": 39612300,
"node_id": "MDQ6VXNlcjM5NjEyMzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/39612300?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nicolasahar",
"html_url": "https://github.com/nicolasahar",
"followers_url": "https://api.github.com/users/nicolasahar/followers",
"following_url": "https://api.github.com/users/nicolasahar/following{/other_user}",
"gists_url": "https://api.github.com/users/nicolasahar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nicolasahar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nicolasahar/subscriptions",
"organizations_url": "https://api.github.com/users/nicolasahar/orgs",
"repos_url": "https://api.github.com/users/nicolasahar/repos",
"events_url": "https://api.github.com/users/nicolasahar/events{/privacy}",
"received_events_url": "https://api.github.com/users/nicolasahar/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, as mentioned in their changelog, the only difference between the v2 and v3 is the compatibility with TF 1.15 as they removed the `einsum` operation.\r\n\r\nIt won't change anything for the huggingface/transformers users as the models available here are only for TF2.\r\n\r\n\r\n"
] | 1,581 | 1,581 | 1,581 | NONE | null | # 🚀 Feature request
Albert v3 was recently released on TFHub [here](https://tfhub.dev/google/albert_base/3). Could you please add it to the list of available pretrained models [here](https://huggingface.co/transformers/pretrained_models.html)?
## Motivation
This would provide the community with the most up-to-date ALBERT version.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2763/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2763/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2762 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2762/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2762/comments | https://api.github.com/repos/huggingface/transformers/issues/2762/events | https://github.com/huggingface/transformers/pull/2762 | 561,215,835 | MDExOlB1bGxSZXF1ZXN0MzcyMDY3MjM3 | 2,762 | Add contributors snapshot | {
"login": "clmnt",
"id": 821155,
"node_id": "MDQ6VXNlcjgyMTE1NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/821155?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/clmnt",
"html_url": "https://github.com/clmnt",
"followers_url": "https://api.github.com/users/clmnt/followers",
"following_url": "https://api.github.com/users/clmnt/following{/other_user}",
"gists_url": "https://api.github.com/users/clmnt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/clmnt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/clmnt/subscriptions",
"organizations_url": "https://api.github.com/users/clmnt/orgs",
"repos_url": "https://api.github.com/users/clmnt/repos",
"events_url": "https://api.github.com/users/clmnt/events{/privacy}",
"received_events_url": "https://api.github.com/users/clmnt/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2762?src=pr&el=h1) Report\n> Merging [#2762](https://codecov.io/gh/huggingface/transformers/pull/2762?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/33d3072e1c54bcd235447b98c6dea1b4cb71234c?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2762?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2762 +/- ##\n======================================\n Coverage 75.1% 75.1% \n======================================\n Files 93 93 \n Lines 15249 15249 \n======================================\n Hits 11452 11452 \n Misses 3797 3797\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2762?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2762/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.93% <0%> (ΓΈ)` | :arrow_up: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2762/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.77% <0%> (ΓΈ)` | :arrow_up: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2762/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.21% <0%> (ΓΈ)` | :arrow_up: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2762/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `94.27% <0%> (ΓΈ)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2762/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.39% <0%> (ΓΈ)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2762?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2762?src=pr&el=footer). Last update [33d3072...8b6a98e](https://codecov.io/gh/huggingface/transformers/pull/2762?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Nice!"
] | 1,581 | 1,581 | 1,581 | MEMBER | null | powered by https://github.com/sourcerer-io/hall-of-fame | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2762/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2762/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2762",
"html_url": "https://github.com/huggingface/transformers/pull/2762",
"diff_url": "https://github.com/huggingface/transformers/pull/2762.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2762.patch",
"merged_at": 1581020748000
} |
https://api.github.com/repos/huggingface/transformers/issues/2761 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2761/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2761/comments | https://api.github.com/repos/huggingface/transformers/issues/2761/events | https://github.com/huggingface/transformers/pull/2761 | 561,202,058 | MDExOlB1bGxSZXF1ZXN0MzcyMDU1NzI2 | 2,761 | [docs] Add menu w/ links to other pages on hf.co | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2761?src=pr&el=h1) Report\n> Merging [#2761](https://codecov.io/gh/huggingface/transformers/pull/2761?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/33d3072e1c54bcd235447b98c6dea1b4cb71234c?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2761?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2761 +/- ##\n======================================\n Coverage 75.1% 75.1% \n======================================\n Files 93 93 \n Lines 15249 15249 \n======================================\n Hits 11452 11452 \n Misses 3797 3797\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2761?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2761?src=pr&el=footer). Last update [33d3072...e6944e6](https://codecov.io/gh/huggingface/transformers/pull/2761?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"btw @LysandreJik mkdocs looks really cool :)",
"Yeah I really like mkdocs as well"
] | 1,581 | 1,581 | 1,581 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2761/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2761/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2761",
"html_url": "https://github.com/huggingface/transformers/pull/2761",
"diff_url": "https://github.com/huggingface/transformers/pull/2761.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2761.patch",
"merged_at": 1581021003000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/2760 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2760/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2760/comments | https://api.github.com/repos/huggingface/transformers/issues/2760/events | https://github.com/huggingface/transformers/pull/2760 | 561,135,192 | MDExOlB1bGxSZXF1ZXN0MzcyMDAxMDM2 | 2,760 | build: add poetry, an alternative to setup.py with dependency versions tracked | {
"login": "aurelien-clu",
"id": 18244614,
"node_id": "MDQ6VXNlcjE4MjQ0NjE0",
"avatar_url": "https://avatars.githubusercontent.com/u/18244614?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aurelien-clu",
"html_url": "https://github.com/aurelien-clu",
"followers_url": "https://api.github.com/users/aurelien-clu/followers",
"following_url": "https://api.github.com/users/aurelien-clu/following{/other_user}",
"gists_url": "https://api.github.com/users/aurelien-clu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aurelien-clu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aurelien-clu/subscriptions",
"organizations_url": "https://api.github.com/users/aurelien-clu/orgs",
"repos_url": "https://api.github.com/users/aurelien-clu/repos",
"events_url": "https://api.github.com/users/aurelien-clu/events{/privacy}",
"received_events_url": "https://api.github.com/users/aurelien-clu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Not sure why this breaks the CI?\r\n\r\nShouldn't we _not_ version control the .lock file?",
"To me one interesing is to track `.lock` that way you are always certain to have a working versions, i.e. dependencies version that match together.\r\n\r\nIndeed I am not sure why the CI fails :/",
"Pinging @aaugustin our Python-ecosystem/packaging expert on this, but I donβt think we want to commit to maintaining multiple different install systems",
"https://circleci.com/gh/huggingface/transformers/15309?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link\r\n```\r\nbuilder = WheelBuilder(poetry, SystemEnv(Path(sys.prefix)), NullIO())\r\n```\r\n\r\nThe CI now uses `poetry` for `wheels` which was unexpected to me :/\r\n\r\n\r\n`poetry build -f wheel` broken for C extensions\r\n#1332 : https://github.com/python-poetry/poetry/issues/1332\r\n\r\nI am not for changing the way you build, but only give the option to users to be able to manage dependencies using `poetry` :wink: \r\n",
"I have squashed my commits (sorry for the multiple CI runs)\r\n\r\nSome issues where:\r\n\r\n- `python 3.5` is needed (I used 3.6 so it complies with `black`), so its matches your `setup.py`\r\n- email address of one authors was not compliant (needs to be: `\"author <email>\"`)\r\n\r\nNew error:\r\n\r\n```\r\nThe following workers failed to return coverage data, ensure that pytest-cov is installed on these workers.\r\n```\r\n\r\nI am investigating.\r\n\r\nEdit:\r\n\r\nCould it be because I push once more?\r\nBecause it is present in `.circleci/config.yml` and seems to be installed during the `ci` :thinking: \r\nAnd it works for few tests having also `pytest-cov`\r\n\r\nEdit 2: **all good**, must have been committing once more",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2760?src=pr&el=h1) Report\n> Merging [#2760](https://codecov.io/gh/huggingface/transformers/pull/2760?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/33d3072e1c54bcd235447b98c6dea1b4cb71234c?src=pr&el=desc) will **decrease** coverage by `25.36%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2760?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2760 +/- ##\n===========================================\n- Coverage 75.1% 49.73% -25.37% \n===========================================\n Files 93 93 \n Lines 15249 15249 \n===========================================\n- Hits 11452 7584 -3868 \n- Misses 3797 7665 +3868\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2760?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/2760/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jYW1lbWJlcnQucHk=) | `0% <0%> (-100%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2760/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG1fcm9iZXJ0YS5weQ==) | `0% <0%> (-100%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2760/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `0% <0%> (-100%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2760/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jdHJsLnB5) | `0% <0%> (-97.83%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/2760/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `0% <0%> (-96.55%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2760/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `0% <0%> (-96.06%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2760/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `0% <0%> (-95.85%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/2760/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `0% <0%> (-95.13%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/2760/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `0% <0%> (-94.67%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2760/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `0% <0%> (-92.83%)` | :arrow_down: |\n| ... and [18 more](https://codecov.io/gh/huggingface/transformers/pull/2760/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2760?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2760?src=pr&el=footer). Last update [33d3072...4ab71c3](https://codecov.io/gh/huggingface/transformers/pull/2760?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Like @julien-c said, we don't want to maintain both poetry and setuptools configurations, because this is likely to create confusion and waste everyone's time (which already started with CI in this PR).\r\n\r\nSwitching to poetry could be a good move, but then we should ditch `setup.py` entirely and make sure all workflows are still operational.",
"> Like @julien-c said, we don't want to maintain both poetry and setuptools configurations, because this is likely to create confusion and waste everyone's time (which already started with CI in this PR).\r\n> \r\n> Switching to poetry could be a good move, but then we should ditch `setup.py` entirely and make sure all workflows are still operational.\r\n\r\nI am not against you ditch `setup.py` but that's your call :wink: \r\n\r\nAs for not wanting to maintain both `poetry` and `setuptools`, maybe there will be people wanting to use `poetry` and will maintain it themselves. This would not mean any extra work for people not wanting to update `poetry` (even if I think maintaining `poetry` does not require much effort, there was some at the beginning and it has been done :wink: )\r\n",
"Replying to the discussion about the lock file: in pipenv projects I never share the lock file. Yes, you get better (locked) version control but in practice this does not work cross platform at all. Hashes for slightly more complex packages are mostly platform dependent. Installations between colleagues failed because of this. The lock file is a good idea in practice or for in-house deployment but is not useful in the real world, I think. ",
"On the lock file discussion, I think it's not worth it to version it in git for libraries, in general. The pro is an almost reproducible environment. Then con is having to constantly keep it up-to-date for new versions of all the dependencies (including the transitive ones), even for little changes (e.g., tqdm from 4.36.0 to 4.36.1). You could also avoid updating it, but then you'd never catch bugs on new versions. So I think it's good to keep reproducibility on python projects that are not libraries, especially when you want to make sure your code works on production as similar to your env as possible.\r\n\r\nAs an outsider, I see moving to poetry as a good idea. Pros: it works well and fast, specifying the test/dev/docs dependencies, simpler and fewer package configuration files (in theory, only `pyproject.toml`), can check if your env complies with the config file, can specify Python versions, can publish the package easily, more flexibility when specifying the dependencies' versions. The only con I see, apart from learning the tool which should be fast, is that `pip install --editable` wouldn't work as of today for the users.",
"> in practice this does not work cross platform at all\r\n\r\nI agree that's an argument against it.\r\n\r\n> Then con is having to constantly keep it up-to-date for new versions of all the dependencies (including the transitive ones), even for little changes (e.g., tqdm from 4.36.0 to 4.36.1).\r\n\r\nI see no reason why you would need to keep it up-to-date. To me it is simply a (near) guarantee to be able to have a working environment to develop on the project. No matter if you don't have all the latest updates. Most little changes from dependencies have little to no impact on your own development. (library or project)\r\n\r\nAnyhow, feel free to tell me to remove the `.lock` or to close this issue & PR π \r\n",
"> I see no reason why you would need to keep it up-to-date. To me it is simply a (near) guarantee to be able to have a working environment to develop on the project. No matter if you don't have all the latest updates. Most little changes from dependencies have little to no impact on your own development. (library or project)\r\n\r\nThe problem I see is that some dependency versions are gonna stall forever, while actually the latest ones haven't been tried and are more likely to break the codebase.",
"> The problem I see is that some dependency versions are gonna stall forever, while actually the latest ones haven't been tried and are more likely to break the codebase.\r\n\r\nIt does not seem to be a good behavior to add breaking new dependencies π€ (especially in a `lib` with 22k βοΈ )\r\n\r\nAs for stalling ones, `poetry update` most of the time will do the trick (updating your state to a newer working state) and I suppose there should be 1 or several people interested with having a most up to date setup and could contribute it though I may be naive π \r\n",
"> It does not seem to be a good behavior to add breaking new dependencies (especially in a `lib` with 22k )\r\n\r\nNot breaking dependencies, but a dependency version update makes your codebase to break, especially when you have an old version because the one in the codebase is quite old.\r\n\r\n> As for stalling ones, `poetry update` most of the time will do the trick (updating your state to a newer working state) and I suppose there should be 1 or several people interested with having a most up to date setup and could contribute it though I may be naive\r\n\r\nThere's dependabot. But I think it's not worth it, that's my point.",
"A compromise could be that dependabot sends PRs monthly so it's less overwhelming. But I still don't see keeping the test env reproducible as an advantage (it doesn't reflect users' env).",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,581 | 1,588 | 1,588 | NONE | null | Hello,
I have written a `pyproject.toml` so your project can be set up using [poetry](https://github.com/python-poetry/poetry).
That way, dependency versions can easily be tracked by people wanting to use **poetry** (it is optional).
# Example
```bash
# first set up your virtual environment, then:
pip install poetry
poetry install # this is equivalent to 'pip install .' but with versions tracked
poetry install --extras testing # pip install -e ".[testing]"
poetry install --extras examples # pip install -r examples/requirements.txt
poetry install --extras torch # pip install -e ".[torch]"
poetry install --extras tf # pip install -e ".[tf]"
# edit: updating dependencies to the latest possible:
poetry update
# adding new dependencies
poetry add MyPyModule
```
# Notes
This does not change any Python code, i.e. everything still works :)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2760/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2760/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2760",
"html_url": "https://github.com/huggingface/transformers/pull/2760",
"diff_url": "https://github.com/huggingface/transformers/pull/2760.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2760.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/2759 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2759/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2759/comments | https://api.github.com/repos/huggingface/transformers/issues/2759/events | https://github.com/huggingface/transformers/issues/2759 | 561,083,113 | MDU6SXNzdWU1NjEwODMxMTM= | 2,759 | Loss is calculated on all tokens, including padding, in the LM fine-tuning example | {
"login": "plroit",
"id": 1734563,
"node_id": "MDQ6VXNlcjE3MzQ1NjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1734563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/plroit",
"html_url": "https://github.com/plroit",
"followers_url": "https://api.github.com/users/plroit/followers",
"following_url": "https://api.github.com/users/plroit/following{/other_user}",
"gists_url": "https://api.github.com/users/plroit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/plroit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/plroit/subscriptions",
"organizations_url": "https://api.github.com/users/plroit/orgs",
"repos_url": "https://api.github.com/users/plroit/repos",
"events_url": "https://api.github.com/users/plroit/events{/privacy}",
"received_events_url": "https://api.github.com/users/plroit/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, `BertForMaskedLM` [does not use an ignore index set to -1](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L1018), nor does any other models.\r\n\r\nIt was updated in [2.4.0](https://github.com/huggingface/transformers/releases/tag/v2.4.0). The scripts should be run against the latest version of the library.\r\n\r\nIf you want to run against v2.3.0, please use a script from v2.3.0, for example [run_lm_finetuning](https://github.com/huggingface/transformers/blob/v2.3.0/examples/run_lm_finetuning.py).",
"You're right, I mixed up the versions. closing the issue."
] | 1,581 | 1,581 | 1,581 | NONE | null | # 🐛 Bug
The BERT fine-tuning example uses a special index to mark ignored locations for the loss function:
`loss_fct = CrossEntropyLoss(ignore_index=-1)`
While in the same example, the masking function that samples locations to be included or excluded uses a different index: -100 (which is the default ignored index for the cross-entropy loss function, if one is not supplied):
`labels[~masked_indices] = -100 # We only compute loss on masked tokens`
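A minimal self-contained sketch of the intended convention, where only the sampled tokens contribute to the loss (all tensors below are dummies, not the example's real inputs):

```python
import torch
from torch.nn import CrossEntropyLoss

vocab_size, bs, seq_len = 100, 2, 32
input_ids = torch.randint(0, vocab_size, (bs, seq_len))
prediction_scores = torch.randn(bs, seq_len, vocab_size)  # stand-in for model logits

labels = input_ids.clone()
masked_indices = torch.bernoulli(torch.full(labels.shape, 0.15)).bool()
labels[~masked_indices] = -100  # positions that must not contribute to the loss

loss_fct = CrossEntropyLoss()  # default ignore_index is -100, matching the labels
loss = loss_fct(prediction_scores.view(-1, vocab_size), labels.view(-1))
```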
Model I am using (Bert, XLNet ...): All models.
Language I am using the model on (English, Chinese ...): All languages
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The task I am working on is:
* [X] an official GLUE/SQUaD task: LM Finetuning
* [ ] my own task or dataset: (give details below)
## Expected behavior
The loss should be computed only on the 15% (`mlm_probability`) of sampled tokens.
- `transformers` version: 2.3, 2.4
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2759/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2759/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2758 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2758/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2758/comments | https://api.github.com/repos/huggingface/transformers/issues/2758/events | https://github.com/huggingface/transformers/issues/2758 | 561,080,134 | MDU6SXNzdWU1NjEwODAxMzQ= | 2,758 | TFRoberta output with attention_mask changes in version 2.3.0 vs 2.4.1 | {
"login": "btel",
"id": 41565,
"node_id": "MDQ6VXNlcjQxNTY1",
"avatar_url": "https://avatars.githubusercontent.com/u/41565?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/btel",
"html_url": "https://github.com/btel",
"followers_url": "https://api.github.com/users/btel/followers",
"following_url": "https://api.github.com/users/btel/following{/other_user}",
"gists_url": "https://api.github.com/users/btel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/btel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/btel/subscriptions",
"organizations_url": "https://api.github.com/users/btel/orgs",
"repos_url": "https://api.github.com/users/btel/repos",
"events_url": "https://api.github.com/users/btel/events{/privacy}",
"received_events_url": "https://api.github.com/users/btel/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834054694,
"node_id": "MDU6TGFiZWwxODM0MDU0Njk0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/TensorFlow",
"name": "TensorFlow",
"color": "FF6F00",
"default": false,
"description": "Anything TensorFlow"
},
{
"id": 1834056761,
"node_id": "MDU6TGFiZWwxODM0MDU2NzYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling",
"name": "Core: Modeling",
"color": "FF8446",
"default": false,
"description": "Internals of the library; Models."
},
{
"id": 1834081910,
"node_id": "MDU6TGFiZWwxODM0MDgxOTEw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Usage",
"name": "Usage",
"color": "e28436",
"default": false,
"description": "General questions about the library"
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,581 | 1,586 | 1,586 | CONTRIBUTOR | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Roberta
Language I am using the model on (English, Chinese ...): not relevant
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
```python
import tensorflow as tf
tf.get_logger().setLevel('CRITICAL')
import transformers
print(transformers.__version__)
from transformers import TFRobertaModel, RobertaConfig
from numpy.testing import assert_allclose
config = RobertaConfig()
model = TFRobertaModel(config)
input1 = tf.constant([[5, 3, 4, 8, 7, 1, 6]])
attention_mask1 = tf.constant([[1, 1, 1, 1, 1, 0, 1]])
out1, _ = model({'input_ids': input1, 'attention_mask': attention_mask1})
input2 = tf.constant([[5, 3, 4, 8, 7, 5, 6]])
attention_mask2 = tf.constant([[1, 1, 1, 1, 1, 0, 1]])
out2, _ = model({'input_ids': input2, 'attention_mask': attention_mask2})
assert_allclose(out1.numpy()[:, :5, :], out2.numpy()[:, :5, :])
```
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: I am using dummy token ids
## To reproduce
Steps to reproduce the behavior:
1. make a new virtualenv
2. install tensorflow
3. pip install transformers==2.3.0
4. save the script in test_mask.py
5. run python test_mask.py
6. repeat steps 1-5, but this time install the latest release: pip install transformers
In case of transformers==2.3.0, the test **passes**, giving the following output:
```
2.3.0
```
In case of transformers==2.4.1, the test **fails**:
```
2.4.1
Traceback (most recent call last):
File "test_attention_mask.py", line 21, in <module>
assert_allclose(out1.numpy()[:, :5, :], out2.numpy()[:, :5, :])
File "/home/bartosz/.pyenv/versions/aphp-django/lib/python3.7/site-packages/numpy/testing/_private/utils.py", line 1533, in assert_allclose
verbose=verbose, header=header, equal_nan=equal_nan)
File "/home/bartosz/.pyenv/versions/aphp-django/lib/python3.7/site-packages/numpy/testing/_private/utils.py", line 846, in assert_array_compare
raise AssertionError(msg)
AssertionError:
Not equal to tolerance rtol=1e-07, atol=0
Mismatched elements: 3840 / 3840 (100%)
Max absolute difference: 0.43364888
Max relative difference: 337.9916
x: array([[[ 0.742064, -1.048889, -1.133795, ..., 1.208201, -0.110544,
-1.556664],
[-0.307906, -0.545374, -1.124657, ..., 0.067571, -0.857922,...
y: array([[[ 0.718682, -0.995075, -1.105745, ..., 1.380688, -0.071943,
-1.627201],
[-0.390375, -0.534317, -1.113236, ..., 0.178188, -0.822041,...
```
## Expected behavior
In my understanding, the test should pass, because the only difference between inputs `input1` and `input2` is the token at index 5, which is masked out in both attention masks (i.e., `input1[5]==1` while `input2[5]==5`, but the attention mask is 0 at that position in both cases). Note that I don't include the embedding for this token in the comparison of the outputs.
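For what it's worth, here is a sketch continuing the script above that compares only the attended positions, so the check does not depend on hard-coded indices (the tolerance is an assumption):

```python
import numpy as np

keep = attention_mask1.numpy().astype(bool)[0]  # positions that are attended to
np.testing.assert_allclose(out1.numpy()[0][keep], out2.numpy()[0][keep], rtol=1e-5)
```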
## Environment info
- `transformers` version: 2.4.1
- Platform: linux
- Python version: 3.7.4
- PyTorch version (GPU?): No
- Tensorflow version (GPU?): 2.1.0 (no GPU)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2758/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2758/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2757 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2757/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2757/comments | https://api.github.com/repos/huggingface/transformers/issues/2757/events | https://github.com/huggingface/transformers/issues/2757 | 560,974,408 | MDU6SXNzdWU1NjA5NzQ0MDg= | 2,757 | Cannot reproduce SQUAD Example | {
"login": "eblaudez",
"id": 11454249,
"node_id": "MDQ6VXNlcjExNDU0MjQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/11454249?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eblaudez",
"html_url": "https://github.com/eblaudez",
"followers_url": "https://api.github.com/users/eblaudez/followers",
"following_url": "https://api.github.com/users/eblaudez/following{/other_user}",
"gists_url": "https://api.github.com/users/eblaudez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eblaudez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eblaudez/subscriptions",
"organizations_url": "https://api.github.com/users/eblaudez/orgs",
"repos_url": "https://api.github.com/users/eblaudez/repos",
"events_url": "https://api.github.com/users/eblaudez/events{/privacy}",
"received_events_url": "https://api.github.com/users/eblaudez/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"1. What do you mean \"weird results\"?\r\n2. What do you mean \"v2 dataset it not works anymore\"?\r\n3. Please provide all the information required in the template, i.e. python version, transformers version, torch version etc",
"1. F1 & Exact match:~18 (should be 88/81 no ?)\r\n2. Squad 2.0\r\n3. Python 3.6, last transformer version (clone yesterday), torch 1.4.0\r\n\r\nTensorboard :\r\n\r\n\r\n"
] | 1,580 | 1,581 | 1,581 | NONE | null | I'm not able to reproduce the SQuAD experiment (via the example). I tried this command line:
python run_squad.py \
--model_type bert \
--model_name_or_path bert-base-cased \
--do_train \
--do_eval \
--do_lower_case \
--train_file $SQUAD_DIR/train-v1.1.json \
--predict_file $SQUAD_DIR/dev-v1.1.json \
--per_gpu_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2.0 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/debug_squad/
That gave very weird results.
Then I read a little the forum and I tried:
python3 run_squad.py \
--model_type bert \
--model_name_or_path bert-base-cased \
--do_train \
--do_eval \
--do_lower_case \
--version_2_with_negative \
--train_file $SQUAD_DIR/train-v1.1.json \
--predict_file $SQUAD_DIR/dev-v1.1.json \
--per_gpu_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2.0 \
--max_seq_length 384 \
--doc_stride 128 \
--save_steps 10000 \
--output_dir debug_squad/ \
--overwrite_output_dir
(I also tried with the v2 dataset; it does not work either.) Can you give me some leads to reproduce the results given in the git readme, or point me to the branch that does?
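For what it's worth, a possible explanation for the weird results: the second command passes `--version_2_with_negative` while still pointing at the v1.1 files, and both commands combine `--do_lower_case` with the cased checkpoint `bert-base-cased`. Assuming the flags should match the data and the tokenizer, a consistent SQuAD 2.0 invocation would look like:
python3 run_squad.py \
  --model_type bert \
  --model_name_or_path bert-base-uncased \
  --do_train \
  --do_eval \
  --do_lower_case \
  --version_2_with_negative \
  --train_file $SQUAD_DIR/train-v2.0.json \
  --predict_file $SQUAD_DIR/dev-v2.0.json \
  --per_gpu_train_batch_size 12 \
  --learning_rate 3e-5 \
  --num_train_epochs 2.0 \
  --max_seq_length 384 \
  --doc_stride 128 \
  --output_dir debug_squad_v2/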
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2757/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2757/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2756 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2756/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2756/comments | https://api.github.com/repos/huggingface/transformers/issues/2756/events | https://github.com/huggingface/transformers/pull/2756 | 560,965,048 | MDExOlB1bGxSZXF1ZXN0MzcxODU5OTMy | 2,756 | BERT decoder: Fix failure with the default attention mask. | {
"login": "osyvokon",
"id": 2910707,
"node_id": "MDQ6VXNlcjI5MTA3MDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/2910707?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/osyvokon",
"html_url": "https://github.com/osyvokon",
"followers_url": "https://api.github.com/users/osyvokon/followers",
"following_url": "https://api.github.com/users/osyvokon/following{/other_user}",
"gists_url": "https://api.github.com/users/osyvokon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/osyvokon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/osyvokon/subscriptions",
"organizations_url": "https://api.github.com/users/osyvokon/orgs",
"repos_url": "https://api.github.com/users/osyvokon/repos",
"events_url": "https://api.github.com/users/osyvokon/events{/privacy}",
"received_events_url": "https://api.github.com/users/osyvokon/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for the feedback! That's a valid concern. I made config handling consistent, at least for the BERT tests. But if you decide that it's too much change for such a trivial fix, I can revert the changes in tests.",
"(Seems like CircleCI tests failure is transient and unrelated to the PR)",
"That's one way of solving the issue, but now it makes the BERT tests incoherent with the rest of the tests, which all use tuples instead of dictionaries. For this PR, I believe the most simple would be to revert to using tuples and use this tuple in `test_bert_model_as_decoder_with_default_input_mask`. What do you think?",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2756?src=pr&el=h1) Report\n> Merging [#2756](https://codecov.io/gh/huggingface/transformers/pull/2756?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1f5db9a13c8932e02e6e7d599a16dc262b1570bf?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2756?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2756 +/- ##\n=======================================\n Coverage 75.02% 75.02% \n=======================================\n Files 93 93 \n Lines 15275 15275 \n=======================================\n Hits 11460 11460 \n Misses 3815 3815\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2756?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2756/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `87.9% <ΓΈ> (ΓΈ)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/2756/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jYW1lbWJlcnQucHk=) | `100% <0%> (ΓΈ)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2756/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG1fcm9iZXJ0YS5weQ==) | `100% <0%> (ΓΈ)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2756/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `100% <0%> (ΓΈ)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2756/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jdHJsLnB5) | `97.82% <0%> (ΓΈ)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/2756/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `96.54% <0%> (ΓΈ)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2756/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `96.05% <0%> (ΓΈ)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2756/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.84% <0%> (ΓΈ)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/2756/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `95.11% <0%> (ΓΈ)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/2756/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `94.66% <0%> (ΓΈ)` | :arrow_up: |\n| ... and [19 more](https://codecov.io/gh/huggingface/transformers/pull/2756/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2756?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2756?src=pr&el=footer). 
Last update [1f5db9a...b5b92ed](https://codecov.io/gh/huggingface/transformers/pull/2756?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"@LysandreJik, I agree. Reverted unnecessary changes in tests.",
"Great, thanks @asivokon !!"
] | 1,580 | 1,581 | 1,581 | CONTRIBUTOR | null | PyTorch < 1.3 requires multiplication operands to be of the same type. This was violated when using the default attention mask (i.e., `attention_mask=None` in the arguments) with BERT in decoder mode.
In particular, this was breaking `Model2Model` and made a tutorial from quickstart.md fail.
A test is included, but here is a minimal snippet to reproduce:
```python
import torch
from transformers import BertModel
model = BertModel.from_pretrained("bert-base-uncased", is_decoder=True)  # decoder mode builds a causal mask
inputs = torch.LongTensor([[1, 2, 3]])
model(inputs)  # no `attention_mask` provided, so a default float mask meets an integer causal mask
```
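For reference, the shape of the fix is to give both operands a single dtype before the product — a sketch of the relevant lines in `modeling_bert.py`, not the full diff:
```python
# causal and attention masks must have the same type (PyTorch < 1.3)
causal_mask = causal_mask.to(attention_mask.dtype)
extended_attention_mask = causal_mask[:, None, :, :] * attention_mask[:, None, None, :]
```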
On PyTorch 1.2 or older, the reproduction snippet was failing with
```
Traceback (most recent call last):
...
File "/home/oleksiy.syvokon/transformers/src/transformers/modeling_bert.py", line 735, in forward
extended_attention_mask = causal_mask[:, None, :, :] * attention_mask[:, None, None, :]
RuntimeError: expected device cpu and dtype Float but got device cpu and dtype Long
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2756/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2756/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2756",
"html_url": "https://github.com/huggingface/transformers/pull/2756",
"diff_url": "https://github.com/huggingface/transformers/pull/2756.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2756.patch",
"merged_at": 1581452363000
} |
https://api.github.com/repos/huggingface/transformers/issues/2755 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2755/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2755/comments | https://api.github.com/repos/huggingface/transformers/issues/2755/events | https://github.com/huggingface/transformers/issues/2755 | 560,874,868 | MDU6SXNzdWU1NjA4NzQ4Njg= | 2,755 | Multi-text files support for run_lm_finetuning | {
"login": "agemagician",
"id": 6087313,
"node_id": "MDQ6VXNlcjYwODczMTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6087313?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/agemagician",
"html_url": "https://github.com/agemagician",
"followers_url": "https://api.github.com/users/agemagician/followers",
"following_url": "https://api.github.com/users/agemagician/following{/other_user}",
"gists_url": "https://api.github.com/users/agemagician/gists{/gist_id}",
"starred_url": "https://api.github.com/users/agemagician/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/agemagician/subscriptions",
"organizations_url": "https://api.github.com/users/agemagician/orgs",
"repos_url": "https://api.github.com/users/agemagician/repos",
"events_url": "https://api.github.com/users/agemagician/events{/privacy}",
"received_events_url": "https://api.github.com/users/agemagician/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834052847,
"node_id": "MDU6TGFiZWwxODM0MDUyODQ3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20LM%20(Finetuning)",
"name": "Ex: LM (Finetuning)",
"color": "26FFF8",
"default": false,
"description": "Related to language modeling fine-tuning"
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,580 | 1,586 | 1,586 | CONTRIBUTOR | null | # π Feature request
Support multi-text files for run_lm_finetuning.
## Motivation
Currently, you support training from scratch, but only from a single file. Usually, when we train from scratch, we train a model on many text files, not a single one.
It would be great to support multiple text files, and maybe to separate the fine-tuning script from the training-from-scratch script.
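In the meantime, a minimal sketch of a workaround — concatenating many files into the single training file the script currently expects (the paths are hypothetical):
```python
import glob

# Merge every .txt file under corpus/ into one training file.
with open("merged_train.txt", "w", encoding="utf-8") as out:
    for path in sorted(glob.glob("corpus/*.txt")):
        with open(path, encoding="utf-8") as f:
            out.write(f.read())
            out.write("\n")
```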
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2755/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2755/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2754 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2754/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2754/comments | https://api.github.com/repos/huggingface/transformers/issues/2754/events | https://github.com/huggingface/transformers/pull/2754 | 560,870,986 | MDExOlB1bGxSZXF1ZXN0MzcxNzgzMjIx | 2,754 | Changed vocabulary save function. Variable name was inconsistent | {
"login": "dchurchwell",
"id": 43887246,
"node_id": "MDQ6VXNlcjQzODg3MjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/43887246?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dchurchwell",
"html_url": "https://github.com/dchurchwell",
"followers_url": "https://api.github.com/users/dchurchwell/followers",
"following_url": "https://api.github.com/users/dchurchwell/following{/other_user}",
"gists_url": "https://api.github.com/users/dchurchwell/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dchurchwell/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dchurchwell/subscriptions",
"organizations_url": "https://api.github.com/users/dchurchwell/orgs",
"repos_url": "https://api.github.com/users/dchurchwell/repos",
"events_url": "https://api.github.com/users/dchurchwell/events{/privacy}",
"received_events_url": "https://api.github.com/users/dchurchwell/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Great, thank you for taking the time to fix it!"
] | 1,580 | 1,581 | 1,581 | NONE | null | Caused an error to be thrown when passing a file name instead of a directory.
UnboundLocalError: local variable 'vocab_file' referenced before assignment
Associated with issue #2753
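A minimal sketch of the pattern the fix follows (the names are illustrative, not the exact diff):
```python
import os

def _resolve_vocab_file(vocab_path, filename="vocab.txt"):
    # Accept either a directory or an explicit file path, so the
    # variable is always assigned before it is used.
    if os.path.isdir(vocab_path):
        return os.path.join(vocab_path, filename)
    return vocab_path
```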
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2754/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2754/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2754",
"html_url": "https://github.com/huggingface/transformers/pull/2754",
"diff_url": "https://github.com/huggingface/transformers/pull/2754.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2754.patch",
"merged_at": 1581025208000
} |
https://api.github.com/repos/huggingface/transformers/issues/2753 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2753/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2753/comments | https://api.github.com/repos/huggingface/transformers/issues/2753/events | https://github.com/huggingface/transformers/issues/2753 | 560,866,156 | MDU6SXNzdWU1NjA4NjYxNTY= | 2,753 | Saving tokenizer vocabulary throws error when passing file name instead of directory. | {
"login": "dchurchwell",
"id": 43887246,
"node_id": "MDQ6VXNlcjQzODg3MjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/43887246?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dchurchwell",
"html_url": "https://github.com/dchurchwell",
"followers_url": "https://api.github.com/users/dchurchwell/followers",
"following_url": "https://api.github.com/users/dchurchwell/following{/other_user}",
"gists_url": "https://api.github.com/users/dchurchwell/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dchurchwell/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dchurchwell/subscriptions",
"organizations_url": "https://api.github.com/users/dchurchwell/orgs",
"repos_url": "https://api.github.com/users/dchurchwell/repos",
"events_url": "https://api.github.com/users/dchurchwell/events{/privacy}",
"received_events_url": "https://api.github.com/users/dchurchwell/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Closed by #2754 "
] | 1,580 | 1,581 | 1,581 | NONE | null | # π Bug
## Information
Using transfo-xl-wt103
UnboundLocalError: local variable 'vocab_file' referenced before assignment
## To reproduce
Steps to reproduce the behavior:
```python
from transformers import TransfoXLTokenizer

tokenizer = TransfoXLTokenizer.from_pretrained('transfo-xl-wt103')
tokenizer.save_vocabulary('vocab.txt')  # passing a file name (not a directory) raises the UnboundLocalError
```
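Until the fix lands, passing a directory instead of a file name should work around the crash (continuing from the snippet above):
```python
import os

os.makedirs("vocab_out", exist_ok=True)
tokenizer.save_vocabulary("vocab_out")  # the vocabulary file is created inside the directory
```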
## Pull Request
https://github.com/huggingface/transformers/pull/2754
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2753/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2753/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2752 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2752/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2752/comments | https://api.github.com/repos/huggingface/transformers/issues/2752/events | https://github.com/huggingface/transformers/pull/2752 | 560,826,668 | MDExOlB1bGxSZXF1ZXN0MzcxNzQ3MTMy | 2,752 | Fix multi-gpu evaluation in run_glue.py example | {
"login": "peteriz",
"id": 232524,
"node_id": "MDQ6VXNlcjIzMjUyNA==",
"avatar_url": "https://avatars.githubusercontent.com/u/232524?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/peteriz",
"html_url": "https://github.com/peteriz",
"followers_url": "https://api.github.com/users/peteriz/followers",
"following_url": "https://api.github.com/users/peteriz/following{/other_user}",
"gists_url": "https://api.github.com/users/peteriz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/peteriz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/peteriz/subscriptions",
"organizations_url": "https://api.github.com/users/peteriz/orgs",
"repos_url": "https://api.github.com/users/peteriz/repos",
"events_url": "https://api.github.com/users/peteriz/events{/privacy}",
"received_events_url": "https://api.github.com/users/peteriz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2752?src=pr&el=h1) Report\n> Merging [#2752](https://codecov.io/gh/huggingface/transformers/pull/2752?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9d87eafd118739a4c121d69d7cff425264f01e1c?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2752?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2752 +/- ##\n=======================================\n Coverage 74.51% 74.51% \n=======================================\n Files 87 87 \n Lines 14920 14920 \n=======================================\n Hits 11117 11117 \n Misses 3803 3803\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2752?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2752?src=pr&el=footer). Last update [9d87eaf...31218ea](https://codecov.io/gh/huggingface/transformers/pull/2752?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Great, thanks!"
] | 1,580 | 1,581 | 1,581 | CONTRIBUTOR | null | Fix multi-gpu evaluation while training in `examples/run_glue.py` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2752/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2752/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2752",
"html_url": "https://github.com/huggingface/transformers/pull/2752",
"diff_url": "https://github.com/huggingface/transformers/pull/2752.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2752.patch",
"merged_at": 1581025136000
} |
https://api.github.com/repos/huggingface/transformers/issues/2751 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2751/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2751/comments | https://api.github.com/repos/huggingface/transformers/issues/2751/events | https://github.com/huggingface/transformers/issues/2751 | 560,751,124 | MDU6SXNzdWU1NjA3NTExMjQ= | 2,751 | Sentence pair classification | {
"login": "Mahmedturk",
"id": 48975334,
"node_id": "MDQ6VXNlcjQ4OTc1MzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/48975334?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mahmedturk",
"html_url": "https://github.com/Mahmedturk",
"followers_url": "https://api.github.com/users/Mahmedturk/followers",
"following_url": "https://api.github.com/users/Mahmedturk/following{/other_user}",
"gists_url": "https://api.github.com/users/Mahmedturk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mahmedturk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mahmedturk/subscriptions",
"organizations_url": "https://api.github.com/users/Mahmedturk/orgs",
"repos_url": "https://api.github.com/users/Mahmedturk/repos",
"events_url": "https://api.github.com/users/Mahmedturk/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mahmedturk/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834052574,
"node_id": "MDU6TGFiZWwxODM0MDUyNTc0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20Sequence%20Classification",
"name": "Ex: Sequence Classification",
"color": "46FFCF",
"default": false,
"description": ""
},
{
"id": 1834081910,
"node_id": "MDU6TGFiZWwxODM0MDgxOTEw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Usage",
"name": "Usage",
"color": "e28436",
"default": false,
"description": "General questions about the library"
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,580 | 1,586 | 1,586 | NONE | null | # β Questions & Help
Hi,
I want to do sentence-pair classification on the Quora Question Pairs dataset by fine-tuning BERT. I am new to this and do not know where to start. Can anyone let me know how I can get started with this?
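For context, a minimal sketch of sentence-pair encoding and classification with this library — the hyperparameters and labels are illustrative, and this is not a full training loop:
```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# A question pair becomes one sequence: [CLS] q1 [SEP] q2 [SEP]
q1 = "How do I learn Python?"
q2 = "What is the best way to study Python?"
inputs = tokenizer.encode_plus(q1, q2, max_length=128, return_tensors="pt")

labels = torch.tensor([1])  # 1 = duplicate, 0 = not duplicate
loss, logits = model(**inputs, labels=labels)[:2]
```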
**A link to original question on Stack Overflow**: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2751/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2751/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2750 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2750/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2750/comments | https://api.github.com/repos/huggingface/transformers/issues/2750/events | https://github.com/huggingface/transformers/issues/2750 | 560,677,671 | MDU6SXNzdWU1NjA2Nzc2NzE= | 2,750 | default output of BertModel.from_pretrained('bert-base-uncased') | {
"login": "xiaolin-cheng",
"id": 16944705,
"node_id": "MDQ6VXNlcjE2OTQ0NzA1",
"avatar_url": "https://avatars.githubusercontent.com/u/16944705?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xiaolin-cheng",
"html_url": "https://github.com/xiaolin-cheng",
"followers_url": "https://api.github.com/users/xiaolin-cheng/followers",
"following_url": "https://api.github.com/users/xiaolin-cheng/following{/other_user}",
"gists_url": "https://api.github.com/users/xiaolin-cheng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xiaolin-cheng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xiaolin-cheng/subscriptions",
"organizations_url": "https://api.github.com/users/xiaolin-cheng/orgs",
"repos_url": "https://api.github.com/users/xiaolin-cheng/repos",
"events_url": "https://api.github.com/users/xiaolin-cheng/events{/privacy}",
"received_events_url": "https://api.github.com/users/xiaolin-cheng/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834081910,
"node_id": "MDU6TGFiZWwxODM0MDgxOTEw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Usage",
"name": "Usage",
"color": "e28436",
"default": false,
"description": "General questions about the library"
}
] | closed | false | null | [] | [
"See https://huggingface.co/transformers/v1.2.0/_modules/pytorch_transformers/modeling_bert.html#BertModel\r\n\r\nI think this is your output[1]:\r\n\"Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence prediction (classification) objective during Bert pretraining. This output is usually **not** a good summary of the semantic content of the input, youβre often better with averaging or pooling the sequence of hidden-states for the whole input sequence.\"",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,580 | 1,587 | 1,587 | NONE | null | By default `output = BertModel.from_pretrained('bert-base-uncased')` is a 2-tuple where `output[0]` is the hidden states of the last layer, but how is `output[1]` computed? It doesn't seem to be average of the last layer hidden states vectors over multiple tokens. I am trying to leverage output as sentence embedding, not sure if I should use `output[1]`. Thank you! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2750/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2750/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2749 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2749/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2749/comments | https://api.github.com/repos/huggingface/transformers/issues/2749/events | https://github.com/huggingface/transformers/pull/2749 | 560,633,887 | MDExOlB1bGxSZXF1ZXN0MzcxNTg5NDA1 | 2,749 | Upgrade run_generation | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2749?src=pr&el=h1) Report\n> Merging [#2749](https://codecov.io/gh/huggingface/transformers/pull/2749?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ada24def22199459d8c1decc311dfe8dae7a7d8c?src=pr&el=desc) will **decrease** coverage by `0.02%`.\n> The diff coverage is `25%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2749?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2749 +/- ##\n==========================================\n- Coverage 75.1% 75.07% -0.03% \n==========================================\n Files 93 93 \n Lines 15249 15255 +6 \n==========================================\n Hits 11452 11452 \n- Misses 3797 3803 +6\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2749?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2749/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `60.7% <0%> (-0.63%)` | :arrow_down: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2749/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.46% <100%> (ΓΈ)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2749?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2749?src=pr&el=footer). Last update [ada24de...f2bcc91](https://codecov.io/gh/huggingface/transformers/pull/2749?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Changes have been added in PR #2885."
] | 1,580 | 1,651 | 1,582 | MEMBER | null | The script `run_generation` has a few issues that I aim to fix in this PR:
- [x] The XLNet and XLM generations are broken (crash)
- [x] An end-of-sequence token is added to all sequences, even for models that don't have such a token, which results in weird sequence endings.
- [x] There is no way to generate multiple sequences at a time, as was possible before.
- [x] The `length` parameter doesn't take into account the prompt length.
- [x] The prompt is concatenated to the generated sequence, which results in concatenating the initial text for XLNet.
- [x] Actually implement languages for XLM | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2749/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2749/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2749",
"html_url": "https://github.com/huggingface/transformers/pull/2749",
"diff_url": "https://github.com/huggingface/transformers/pull/2749.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2749.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/2748 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2748/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2748/comments | https://api.github.com/repos/huggingface/transformers/issues/2748/events | https://github.com/huggingface/transformers/issues/2748 | 560,592,068 | MDU6SXNzdWU1NjA1OTIwNjg= | 2,748 | TFAlbertModelTest::test_pt_tf_model_equivalence -> Fatal Python Error on Mac | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834053813,
"node_id": "MDU6TGFiZWwxODM0MDUzODEz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/PyTorch",
"name": "PyTorch",
"color": "a12bef",
"default": false,
"description": "Anything PyTorch"
},
{
"id": 1834054694,
"node_id": "MDU6TGFiZWwxODM0MDU0Njk0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/TensorFlow",
"name": "TensorFlow",
"color": "FF6F00",
"default": false,
"description": "Anything TensorFlow"
},
{
"id": 1834088753,
"node_id": "MDU6TGFiZWwxODM0MDg4NzUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Tests",
"name": "Tests",
"color": "a6fcca",
"default": false,
"description": "Related to tests"
}
] | closed | false | null | [] | [
"#2240 was an error with DistilBERT and was fixed with https://github.com/huggingface/transformers/commit/ea2600bd5f1d36f2fb61958be21db5b901e33884 \r\n\r\nDoes this error happen every time you run the test suite?",
"yes!",
"I'm running on Darwin 19.2, Python 3.7.5, torch 1.3.1, tensorflow 2.0.0 and transformers from source and I can't replicate this bug π\r\n\r\nI'm thinking this may be due to a memory issue but it's hard to say given the cryptic error message",
"I bumped tensorflow to 2.1 and cant replicate this failure **or** the flaky CircleCI test #2781\r\n- `transformers` version: 2.4.1\r\n- Platform: Darwin-19.0.0-x86_64-i386-64bit\r\n- Python version: 3.7.5\r\n- PyTorch version (GPU?): 1.4.0 (False)\r\n- Tensorflow version (GPU?): 2.1.0 (False)\r\n- Using GPU in script?: <fill in>\r\n- Using distributed or parallel set-up in script?: <fill in>",
"I also just tried to use python 3.5 and can't replicate.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,580 | 1,587 | 1,587 | CONTRIBUTOR | null | Running the unit tests locally on mac, I get "Fatal Python error: Aborted"
To reproduce, try `pytest tests/test_modeling_tf_albert.py::TFAlbertModelTest::test_pt_tf_model_equivalence`
### Environment Info
- `transformers` version: 2.4.1
- Platform: Darwin-19.0.0-x86_64-i386-64bit
- Python version: 3.7.5
- PyTorch version (GPU?): 1.3.1 (False)
- Tensorflow version (GPU?): 2.0.0 (False)
### Traceback
```
tests/test_modeling_tf_albert.py Fatal Python error: Aborted
Current thread 0x0000000110d2adc0 (most recent call first):
File "/Users/shleifer/miniconda3/envs/nb/lib/python3.7/site-packages/torch/nn/functional.py", line 1372 in linear
File "/Users/shleifer/miniconda3/envs/nb/lib/python3.7/site-packages/torch/nn/modules/linear.py", line 87 in forward
File "/Users/shleifer/miniconda3/envs/nb/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541 in __call__
File "/Users/shleifer/transformers_fork/src/transformers/modeling_albert.py", line 321 in forward
File "/Users/shleifer/miniconda3/envs/nb/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541 in __call__
File "/Users/shleifer/transformers_fork/src/transformers/modeling_albert.py", line 566 in forward
File "/Users/shleifer/miniconda3/envs/nb/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541 in __call__
File "/Users/shleifer/transformers_fork/tests/test_modeling_tf_common.py", line 111 in test_pt_tf_model_equivalence
```
https://github.com/huggingface/transformers/issues/2240 has a different error message from a similar test.
Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2748/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2748/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2747 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2747/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2747/comments | https://api.github.com/repos/huggingface/transformers/issues/2747/events | https://github.com/huggingface/transformers/pull/2747 | 560,558,777 | MDExOlB1bGxSZXF1ZXN0MzcxNTI3NTk4 | 2,747 | Arxiv README | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2747?src=pr&el=h1) Report\n> Merging [#2747](https://codecov.io/gh/huggingface/transformers/pull/2747?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2184f87003c18ad8a172ecab9a821626522cf8e7?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2747?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2747 +/- ##\n======================================\n Coverage 75.1% 75.1% \n======================================\n Files 93 93 \n Lines 15249 15249 \n======================================\n Hits 11452 11452 \n Misses 3797 3797\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2747?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2747?src=pr&el=footer). Last update [2184f87...69d18f4](https://codecov.io/gh/huggingface/transformers/pull/2747?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"LOTM\r\n\r\n(= Looks outstanding to me)",
"I really did my best"
] | 1,580 | 1,580 | 1,580 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2747/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2747/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2747",
"html_url": "https://github.com/huggingface/transformers/pull/2747",
"diff_url": "https://github.com/huggingface/transformers/pull/2747.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2747.patch",
"merged_at": 1580934388000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/2746 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2746/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2746/comments | https://api.github.com/repos/huggingface/transformers/issues/2746/events | https://github.com/huggingface/transformers/pull/2746 | 560,528,138 | MDExOlB1bGxSZXF1ZXN0MzcxNTAyNTcz | 2,746 | Added CamembertForQuestionAnswering | {
"login": "maximeilluin",
"id": 60709375,
"node_id": "MDQ6VXNlcjYwNzA5Mzc1",
"avatar_url": "https://avatars.githubusercontent.com/u/60709375?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maximeilluin",
"html_url": "https://github.com/maximeilluin",
"followers_url": "https://api.github.com/users/maximeilluin/followers",
"following_url": "https://api.github.com/users/maximeilluin/following{/other_user}",
"gists_url": "https://api.github.com/users/maximeilluin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maximeilluin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maximeilluin/subscriptions",
"organizations_url": "https://api.github.com/users/maximeilluin/orgs",
"repos_url": "https://api.github.com/users/maximeilluin/repos",
"events_url": "https://api.github.com/users/maximeilluin/events{/privacy}",
"received_events_url": "https://api.github.com/users/maximeilluin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2746?src=pr&el=h1) Report\n> Merging [#2746](https://codecov.io/gh/huggingface/transformers/pull/2746?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2184f87003c18ad8a172ecab9a821626522cf8e7?src=pr&el=desc) will **increase** coverage by `<.01%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2746?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2746 +/- ##\n=========================================\n+ Coverage 75.1% 75.1% +<.01% \n=========================================\n Files 93 93 \n Lines 15249 15253 +4 \n=========================================\n+ Hits 11452 11456 +4 \n Misses 3797 3797\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2746?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/2746/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `98.87% <ΓΈ> (ΓΈ)` | :arrow_up: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/2746/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `29.18% <ΓΈ> (ΓΈ)` | :arrow_up: |\n| [src/transformers/modeling\\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/2746/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jYW1lbWJlcnQucHk=) | `100% <100%> (ΓΈ)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2746?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2746?src=pr&el=footer). Last update [2184f87...74c277a](https://codecov.io/gh/huggingface/transformers/pull/2746?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"@julien-c Could you please review ;)"
] | 1,580 | 1,582 | 1,582 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2746/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2746/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2746",
"html_url": "https://github.com/huggingface/transformers/pull/2746",
"diff_url": "https://github.com/huggingface/transformers/pull/2746.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2746.patch",
"merged_at": 1582304463000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/2745 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2745/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2745/comments | https://api.github.com/repos/huggingface/transformers/issues/2745/events | https://github.com/huggingface/transformers/pull/2745 | 560,511,070 | MDExOlB1bGxSZXF1ZXN0MzcxNDg4NTE0 | 2,745 | Add BartModel | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1845609017,
"node_id": "MDU6TGFiZWwxODQ1NjA5MDE3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/seq2seq",
"name": "seq2seq",
"color": "fef2c0",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2745?src=pr&el=h1) Report\n> Merging [#2745](https://codecov.io/gh/huggingface/transformers/pull/2745?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/564fd75d65e66d3ac2a7c39558aa1079c9845152?src=pr&el=desc) will **increase** coverage by `0.76%`.\n> The diff coverage is `84.39%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2745?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2745 +/- ##\n=========================================\n+ Coverage 75.34% 76.1% +0.76% \n=========================================\n Files 94 98 +4 \n Lines 15440 15946 +506 \n=========================================\n+ Hits 11633 12136 +503 \n- Misses 3807 3810 +3\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2745?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/modeling\\_encoder\\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/2745/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `26.66% <ΓΈ> (+1.36%)` | :arrow_up: |\n| [src/transformers/utils\\_encoder\\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/2745/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy91dGlsc19lbmNvZGVyX2RlY29kZXIucHk=) | `0% <0%> (ΓΈ)` | |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2745/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.71% <100%> (-0.07%)` | :arrow_down: |\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/2745/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `98.91% <100%> (+0.03%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2745/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `73.6% <100%> (+12.27%)` | :arrow_up: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/2745/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `75.47% <100%> (+0.23%)` | :arrow_up: |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/2745/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `100% <100%> (ΓΈ)` | |\n| [src/transformers/configuration\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/2745/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `100% <100%> (ΓΈ)` | :arrow_up: |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/2745/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `100% <100%> (ΓΈ)` | |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/2745/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.29% <100%> (+0.07%)` | :arrow_up: |\n| ... and [6 more](https://codecov.io/gh/huggingface/transformers/pull/2745/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2745?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2745?src=pr&el=footer). Last update [564fd75...6db143e](https://codecov.io/gh/huggingface/transformers/pull/2745?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"I marked some things \"resolved\" that I've done locally so that I can keep track. Pls advise if it is confusing/not the correct style!",
"> I marked some things \"resolved\" that I've done locally so that I can keep track. Pls advise if it is confusing/not the correct style!\r\n\r\nIt's ok but obviously I can't discuss the new changes then."
] | 1,580 | 1,601 | 1,582 | CONTRIBUTOR | null | This ports BART, a "sequence-to-sequence model trained with denoising as pretraining objective." from https://github.com/pytorch/fairseq/tree/master/examples/bart
The decoder is left-to-right, the encoder is bidirectional. As such, the code only uses a causal attention mask in the decoder.
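For readers, an illustrative sketch of what a causal (lower-triangular) mask looks like — not the model's actual implementation:
```python
import torch

def causal_mask(seq_len):
    # Position i may only attend to positions j <= i.
    return torch.tril(torch.ones(seq_len, seq_len))

print(causal_mask(4))
# tensor([[1., 0., 0., 0.],
#         [1., 1., 0., 0.],
#         [1., 1., 1., 0.],
#         [1., 1., 1., 1.]])
```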
### TODO:
- [x] conversion of pretrained weights
- [x] some unit testing
- [x] inference produces the same results as the fairseq version.
- [x] decide on signature/splitting of encoder, decoder arguments (see https://github.com/huggingface/transformers/blob/808bbd5a6abe5b26656ffd809ce0e753495c912a/src/transformers/modeling_encoder_decoder.py#L240)
- [x] Docstrings
- [x] More comments for code readers
### Future PRs
- [ ] example with correct pretraining objective
- [ ] `BartForSummarization.from_pretrained('bart-large-cnn')`
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2745/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2745/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2745",
"html_url": "https://github.com/huggingface/transformers/pull/2745",
"diff_url": "https://github.com/huggingface/transformers/pull/2745.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2745.patch",
"merged_at": 1582240274000
} |
https://api.github.com/repos/huggingface/transformers/issues/2744 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2744/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2744/comments | https://api.github.com/repos/huggingface/transformers/issues/2744/events | https://github.com/huggingface/transformers/issues/2744 | 560,335,833 | MDU6SXNzdWU1NjAzMzU4MzM= | 2,744 | Albert language model fine tuning not running run_lm_finetuning.py | {
"login": "abdallah197",
"id": 28394606,
"node_id": "MDQ6VXNlcjI4Mzk0NjA2",
"avatar_url": "https://avatars.githubusercontent.com/u/28394606?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abdallah197",
"html_url": "https://github.com/abdallah197",
"followers_url": "https://api.github.com/users/abdallah197/followers",
"following_url": "https://api.github.com/users/abdallah197/following{/other_user}",
"gists_url": "https://api.github.com/users/abdallah197/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abdallah197/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abdallah197/subscriptions",
"organizations_url": "https://api.github.com/users/abdallah197/orgs",
"repos_url": "https://api.github.com/users/abdallah197/repos",
"events_url": "https://api.github.com/users/abdallah197/events{/privacy}",
"received_events_url": "https://api.github.com/users/abdallah197/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834052847,
"node_id": "MDU6TGFiZWwxODM0MDUyODQ3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20LM%20(Finetuning)",
"name": "Ex: LM (Finetuning)",
"color": "26FFF8",
"default": false,
"description": "Related to language modeling fine-tuning"
},
{
"id": 1834053813,
"node_id": "MDU6TGFiZWwxODM0MDUzODEz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/PyTorch",
"name": "PyTorch",
"color": "a12bef",
"default": false,
"description": "Anything PyTorch"
},
{
"id": 1834081910,
"node_id": "MDU6TGFiZWwxODM0MDgxOTEw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Usage",
"name": "Usage",
"color": "e28436",
"default": false,
"description": "General questions about the library"
}
] | closed | false | null | [] | [
"@thomwolf can you give any insights regarding this?",
"how much lines in `test.txt`?",
"1,041,130 line",
"I have a similar issue finetuning the language model with bert. In the end, I had to scale down my training to ~200,000 lines to make it work, which is a very small proportion of my original dataset.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,580 | 1,591 | 1,591 | NONE | null | # β Questions & Help
## Information
Model I am using: ALBERT (all variants)
Language I am using the model on: English
The problem arises when using:
* [ ] the official example scripts: (give details below)
The code returns memory-allocation problems when run with any version of ALBERT. I tried to reduce the sequence length and batch size to a minimal setting, but the issue still arises. My setting and the minimized setting both run normally with BERT or RoBERTa; the issue arises only when I change the model to ALBERT.
An example:
`tcmalloc: large alloc 1951195136 bytes == 0x7f750f664000 @ 0x7f76efbf8887 0x7f764c2a1b79 0x7f764c29fb0f 0x7f764c29fc33 0x7f764c26a155 0x7f764c26837e 0x7f764c26bbb1 0x7f764c2606df 0x50a8af 0x50c5b9 0x509d48 0x50aa7d 0x50c5b9 0x508245 0x509642 0x595311 0x5a067e 0x50d966 0x58efc9 0x4c9546 0x5886f4 0x58892e 0x551b81 0x5aa6ec 0x50abb3 0x50c5b9 0x508245 0x50a080 0x50aa7d 0x50c5b9 0x508245`
The task I am working on is:
* [ ] my own task or dataset: (give details below)
Language model fine-tuning for Albert
## To reproduce
Steps to reproduce the behavior:
1. In `run_lm_finetuning.py`, add:
```
from transformers import (
    AlbertConfig,
    AlbertForMaskedLM,
    AlbertTokenizer,
)
```
2. Add to the MODEL_CLASSES dictionary:
```
"albert": (AlbertConfig, AlbertForMaskedLM, AlbertTokenizer),
```
3. Add a file `test.txt`, a text file similar to the wiki dataset mentioned in the docs.
4. Run the fine-tuning script:
```
python transformers/examples/run_lm_finetuning.py \
    --output_dir=output \
    --model_type=albert \
    --model_name_or_path=albert-base-v1 \
    --do_train \
    --train_data_file test.txt \
    --block_size 50 \
    --per_gpu_train_batch_size 2 \
    --max_steps 520000 \
    --weight_decay 0.01 \
    --logging_steps 5000 \
    --mlm
```
## Expected behavior
## Environment
* OS: Google colab
* Python version: 3.7
* PyTorch version: 1.3.1
* `transformers` version (or branch): latest
* Using GPU ? yes
* Distributed or parallel setup ? no
* Any other relevant information:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2744/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2744/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2743 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2743/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2743/comments | https://api.github.com/repos/huggingface/transformers/issues/2743/events | https://github.com/huggingface/transformers/issues/2743 | 560,323,253 | MDU6SXNzdWU1NjAzMjMyNTM= | 2,743 | PreTrainedEncoderDecoder keeps giving me the same next token | {
"login": "xiankgx",
"id": 4113258,
"node_id": "MDQ6VXNlcjQxMTMyNTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/4113258?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xiankgx",
"html_url": "https://github.com/xiankgx",
"followers_url": "https://api.github.com/users/xiankgx/followers",
"following_url": "https://api.github.com/users/xiankgx/following{/other_user}",
"gists_url": "https://api.github.com/users/xiankgx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xiankgx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xiankgx/subscriptions",
"organizations_url": "https://api.github.com/users/xiankgx/orgs",
"repos_url": "https://api.github.com/users/xiankgx/repos",
"events_url": "https://api.github.com/users/xiankgx/events{/privacy}",
"received_events_url": "https://api.github.com/users/xiankgx/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I'm using both \"bert-base-uncased\" for both encoder and decoder.",
"> I think I found the problem. I moved the code inside the train_step outside to the enclosing function and it seems to work.\r\n\r\nHi, I am having the same problem, what solved it for you?",
"> > I think I found the problem. I moved the code inside the train_step outside to the enclosing function and it seems to work.\r\n> \r\n> Hi, I am having the same problem, what solved it for you?\r\n\r\nHi, the last time I tried, I was able to got it to work by training it at a lower learning rate and training for more iterations. Try troubleshooting the code by lowering the number of samples, and try to overfit to the training set by training it for more iterations. The loss should go down to at least less than 0.6. Proceed with the full dataset only when things work."
] | 1,580 | 1,582 | 1,580 | NONE | null | Hi, I am trying to use PreTrainedEncoderDecoder to train a seq2seq model. I have working training code, but I'm not sure I'm doing things correctly, because the decoded token is always the same token during inference.
The data consists of paired input and target sentences.
The dataset class looks like this:
```
import torch
from torch.utils.data import Dataset


class LineByLineLabelledTextDataset(Dataset):
    """Labelled text dataset where a line corresponds to a sample."""

    def __init__(self,
                 lines,
                 tokenizer,
                 sep="|||",
                 max_seqlen=512):
        self.lines = lines
        self.tokenizer = tokenizer
        self.sep = sep
        self.max_seqlen = max_seqlen

    def __len__(self):
        return len(self.lines)

    def __getitem__(self, i):
        splitted = self.lines[i].split(self.sep)
        input, target = splitted[0], splitted[1]
        # target += " [GEN_STOP]"
        input_dict = self.tokenizer.encode_plus(input,
                                                max_length=self.max_seqlen,
                                                pad_to_max_length=True)
        target_dict = self.tokenizer.encode_plus(target,
                                                 max_length=self.max_seqlen,
                                                 pad_to_max_length=True)
        return (torch.tensor(input_dict["input_ids"]),
                torch.tensor(target_dict["input_ids"]),
                torch.tensor(input_dict["attention_mask"]),
                torch.tensor(target_dict["attention_mask"]))
```
The training function for one step looks like this:
```
def train_batch(batch, model, optimizer, device, phase="train"):
    input_ids = batch[0].to(device)
    target_ids = batch[1].to(device)
    input_attention_mask = batch[2].to(device)
    target_attention_mask = batch[3].to(device)

    optimizer.zero_grad()
    with torch.set_grad_enabled(phase == "train"):
        outputs = model(input_ids, target_ids,
                        encoder_attention_mask=input_attention_mask,
                        decoder_attention_mask=target_attention_mask,
                        decoder_lm_labels=target_ids)
        lm_loss = outputs[0]

    loss = lm_loss
    loss.backward()
    optimizer.step()
    return loss
```
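One thing worth flagging in the snippet above: `decoder_lm_labels` is the raw `target_ids`, so the padding positions also contribute to the loss, which can dominate training and push the decoder toward emitting one constant token. A minimal sketch of masking them out (an assumption on my part, not part of the original code; the ignored label index is -1 on transformers < 2.4 and -100 from 2.4 on):
```
# hypothetical tweak: exclude pad positions from the LM loss
lm_labels = target_ids.clone()
lm_labels[target_attention_mask == 0] = -1  # use -100 on transformers >= 2.4
outputs = model(input_ids, target_ids,
                encoder_attention_mask=input_attention_mask,
                decoder_attention_mask=target_attention_mask,
                decoder_lm_labels=lm_labels)
```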
The decode function looks like this:
```
def decode(encoder_input_text, model, tokenizer, max_length=20):
    model.eval()
    text = encoder_input_text
    generated_text = "[CLS]"
    while len(generated_text.split()) < max_length:
        encoder_input_ids = tokenizer.encode(text)
        encoder_input_tensor = torch.tensor([encoder_input_ids])
        print(f"encoder_input_tensor: {encoder_input_tensor}")

        decoder_input_ids = tokenizer.encode(generated_text, add_special_tokens=False)
        decoder_input_tensor = torch.tensor([decoder_input_ids])
        print(f"decoder_input_tensor: {decoder_input_tensor}")

        with torch.no_grad():
            outputs = model(encoder_input_ids=encoder_input_tensor,
                            decoder_input_ids=decoder_input_tensor)
            predictions = outputs[0]

        # greedy decoding: always append the single most likely next token
        predicted_index = torch.argmax(predictions[0, -1]).item()
        predicted_token = tokenizer.convert_ids_to_tokens([predicted_index])[0]
        generated_text += " " + predicted_token
        print(generated_text)
        print(len(generated_text.split()))

        if len(generated_text.split()) >= max_length:
            break
    return generated_text
```
I see the training loss goes down a bit during training. I don't know what I'm doing wrong. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2743/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2743/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2742 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2742/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2742/comments | https://api.github.com/repos/huggingface/transformers/issues/2742/events | https://github.com/huggingface/transformers/issues/2742 | 560,319,688 | MDU6SXNzdWU1NjAzMTk2ODg= | 2,742 | do_lower_case strips accents! | {
"login": "avacaondata",
"id": 35173563,
"node_id": "MDQ6VXNlcjM1MTczNTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/35173563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/avacaondata",
"html_url": "https://github.com/avacaondata",
"followers_url": "https://api.github.com/users/avacaondata/followers",
"following_url": "https://api.github.com/users/avacaondata/following{/other_user}",
"gists_url": "https://api.github.com/users/avacaondata/gists{/gist_id}",
"starred_url": "https://api.github.com/users/avacaondata/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/avacaondata/subscriptions",
"organizations_url": "https://api.github.com/users/avacaondata/orgs",
"repos_url": "https://api.github.com/users/avacaondata/repos",
"events_url": "https://api.github.com/users/avacaondata/events{/privacy}",
"received_events_url": "https://api.github.com/users/avacaondata/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Well - is this realy a bug or just an improvement of the documentation?",
"In my opinion it's a bug as it's misleading. If you want to do lower that doesn't mean you want to strip accents too. Those are two separate actions which the user should decide separately. "
] | 1,580 | 1,596 | 1,586 | NONE | null | # 🐛 Bug
When calling BertTokenizer with `do_lower_case=True`, the tokenizer also strips accents, which is a misleading behavior not indicated by the name of the parameter. We suggest that you create another parameter which controls whether or not to strip accents, separate from `do_lower_case`! This also happens in AutoTokenizer. For some languages, like Spanish, this is crucial ("hacia" is not the same as "hacía"). Moreover, it's set to True by default.
https://github.com/huggingface/transformers/blob/2184f87003c18ad8a172ecab9a821626522cf8e7/src/transformers/tokenization_bert.py#L346
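A minimal sketch reproducing the behavior (the exact wordpieces depend on the vocabulary, but the accent is gone either way):
```
from transformers import BertTokenizer

tok = BertTokenizer.from_pretrained("bert-base-multilingual-uncased", do_lower_case=True)
print(tok.tokenize("hacía"))  # tokenized as if the input were "hacia" - the accent is stripped
```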
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2742/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 1,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2742/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2741 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2741/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2741/comments | https://api.github.com/repos/huggingface/transformers/issues/2741/events | https://github.com/huggingface/transformers/issues/2741 | 560,243,422 | MDU6SXNzdWU1NjAyNDM0MjI= | 2,741 | XLM-Roberta mask filling error | {
"login": "sultanovazamat",
"id": 26954978,
"node_id": "MDQ6VXNlcjI2OTU0OTc4",
"avatar_url": "https://avatars.githubusercontent.com/u/26954978?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sultanovazamat",
"html_url": "https://github.com/sultanovazamat",
"followers_url": "https://api.github.com/users/sultanovazamat/followers",
"following_url": "https://api.github.com/users/sultanovazamat/following{/other_user}",
"gists_url": "https://api.github.com/users/sultanovazamat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sultanovazamat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sultanovazamat/subscriptions",
"organizations_url": "https://api.github.com/users/sultanovazamat/orgs",
"repos_url": "https://api.github.com/users/sultanovazamat/repos",
"events_url": "https://api.github.com/users/sultanovazamat/events{/privacy}",
"received_events_url": "https://api.github.com/users/sultanovazamat/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Found solution in [#2509](https://github.com/huggingface/transformers/pull/2509).",
"Hi, indeed this is an error. This will be fixed once #3198 is merged."
] | 1,580 | 1,584 | 1,584 | NONE | null | # XLM-Roberta mask token filling error
Hi! I am trying to use XLM-RoBERTa for a masked LM task, but an error occurs when the model fills the masked token in a test sentence.
**The code is:**
```
import numpy as np
import torch
import transformers as tr

# `config` (with max_length set) and `device` are defined elsewhere in my setup
config.model_name = 'xlm-roberta-base'
tokenizer: tr.XLMRobertaTokenizer = tr.XLMRobertaTokenizer.from_pretrained(config.model_name)
model: tr.XLMRobertaForMaskedLM = tr.XLMRobertaForMaskedLM.from_pretrained(config.model_name)

input_ids = tokenizer.encode_plus("I want to <mask> New York!",
                                  max_length=config.max_length)['input_ids']
x = np.full((config.max_length), fill_value=tokenizer.pad_token_id)
attn = np.zeros_like(x)
for i, tok in enumerate(input_ids):
    x[i] = tok
    attn[i] = 1

x = torch.tensor(x).unsqueeze(0).to(device)
attn = torch.tensor(attn).unsqueeze(0).to(device)
outputs = model(x, attention_mask=attn, masked_lm_labels=x)
```
**The error is**
```
RuntimeError: cublas runtime error : library not initialized at ../aten/src/THC/THCGeneral.cpp:216
```
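A quick diagnostic sketch, in case it helps (my assumption, based on the linked fixes, is that the `<mask>` token id may fall outside the model's embedding range, which surfaces as this opaque CUDA error):
```
# hypothetical sanity check - the mask id must be smaller than vocab_size
print(tokenizer.mask_token_id)
print(model.config.vocab_size)
```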
When I try Albert for a similar task, everything works fine, but the RoBERTa family doesn't.
Could you please help with this issue? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2741/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2741/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2740 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2740/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2740/comments | https://api.github.com/repos/huggingface/transformers/issues/2740/events | https://github.com/huggingface/transformers/issues/2740 | 560,205,858 | MDU6SXNzdWU1NjAyMDU4NTg= | 2,740 | T5 | {
"login": "MVsualStdio",
"id": 40512829,
"node_id": "MDQ6VXNlcjQwNTEyODI5",
"avatar_url": "https://avatars.githubusercontent.com/u/40512829?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MVsualStdio",
"html_url": "https://github.com/MVsualStdio",
"followers_url": "https://api.github.com/users/MVsualStdio/followers",
"following_url": "https://api.github.com/users/MVsualStdio/following{/other_user}",
"gists_url": "https://api.github.com/users/MVsualStdio/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MVsualStdio/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MVsualStdio/subscriptions",
"organizations_url": "https://api.github.com/users/MVsualStdio/orgs",
"repos_url": "https://api.github.com/users/MVsualStdio/repos",
"events_url": "https://api.github.com/users/MVsualStdio/events{/privacy}",
"received_events_url": "https://api.github.com/users/MVsualStdio/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Your attention_mask should be torch.tensor([[1,1,1,1,1,1]]",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,580 | 1,589 | 1,589 | NONE | null | The code is:
```
tokenizer = T5Tokenizer.from_pretrained('t5-small')
model = T5WithLMHeadModel.from_pretrained('t5-small')

input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute")).unsqueeze(0)  # Batch size 1
print(tokenizer._tokenize("Hello, my dog is cute"))
print(input_ids)
print(input_ids.shape)

outputs = model(input_ids=input_ids, attention_mask=torch.tensor([5.]))
print(outputs[0].shape)
```
and the error is:
```
extended_attention_mask = extended_attention_mask.to(dtype=next(self.parameters()).dtype)  # fp16 compatibility
UnboundLocalError: local variable 'extended_attention_mask' referenced before assignment
```
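Following the fix suggested in the comments, a minimal sketch of a valid call (the attention mask must be a 0/1 tensor with the same shape as `input_ids`):
```
outputs = model(input_ids=input_ids,
                attention_mask=torch.ones_like(input_ids))  # e.g. [[1, 1, 1, 1, 1, 1]]
print(outputs[0].shape)
```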
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2740/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2740/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2739 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2739/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2739/comments | https://api.github.com/repos/huggingface/transformers/issues/2739/events | https://github.com/huggingface/transformers/issues/2739 | 560,154,550 | MDU6SXNzdWU1NjAxNTQ1NTA= | 2,739 | Development Infrastructure for ML Projects | {
"login": "shashankMadan-designEsthetics",
"id": 45225143,
"node_id": "MDQ6VXNlcjQ1MjI1MTQz",
"avatar_url": "https://avatars.githubusercontent.com/u/45225143?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shashankMadan-designEsthetics",
"html_url": "https://github.com/shashankMadan-designEsthetics",
"followers_url": "https://api.github.com/users/shashankMadan-designEsthetics/followers",
"following_url": "https://api.github.com/users/shashankMadan-designEsthetics/following{/other_user}",
"gists_url": "https://api.github.com/users/shashankMadan-designEsthetics/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shashankMadan-designEsthetics/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shashankMadan-designEsthetics/subscriptions",
"organizations_url": "https://api.github.com/users/shashankMadan-designEsthetics/orgs",
"repos_url": "https://api.github.com/users/shashankMadan-designEsthetics/repos",
"events_url": "https://api.github.com/users/shashankMadan-designEsthetics/events{/privacy}",
"received_events_url": "https://api.github.com/users/shashankMadan-designEsthetics/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This is an interesting subject but is way too broad for discussing here",
"@julien-c any forum you know for having this discussion?"
] | 1,580 | 1,580 | 1,580 | NONE | null | Hi guys, I am sorry to post an issue that is a bit outside the scope of this project. But I have been a consistent watcher of the transformers project, and it is an excellent example of how people can collaborate, coordinate, and build something together.
But I am a rookie when it comes to working with people and getting a project going, and I have been assigned the task of creating reliable and scalable infrastructure where a team has space to research, develop, test, and deploy.
I have been dabbling with Bitbucket Pipelines and Docker, but it would be helpful to get your opinions on it. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2739/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2739/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2738 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2738/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2738/comments | https://api.github.com/repos/huggingface/transformers/issues/2738/events | https://github.com/huggingface/transformers/pull/2738 | 560,010,223 | MDExOlB1bGxSZXF1ZXN0MzcxMDc2MzY2 | 2,738 | Fix GPT2 config set to trainable | {
"login": "neonbjb",
"id": 833082,
"node_id": "MDQ6VXNlcjgzMzA4Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/833082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/neonbjb",
"html_url": "https://github.com/neonbjb",
"followers_url": "https://api.github.com/users/neonbjb/followers",
"following_url": "https://api.github.com/users/neonbjb/following{/other_user}",
"gists_url": "https://api.github.com/users/neonbjb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/neonbjb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neonbjb/subscriptions",
"organizations_url": "https://api.github.com/users/neonbjb/orgs",
"repos_url": "https://api.github.com/users/neonbjb/repos",
"events_url": "https://api.github.com/users/neonbjb/events{/privacy}",
"received_events_url": "https://api.github.com/users/neonbjb/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2738?src=pr&el=h1) Report\n> Merging [#2738](https://codecov.io/gh/huggingface/transformers/pull/2738?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9e5b549b4d47678bdc74bc8f650e82cf25bfc245?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2738?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2738 +/- ##\n=======================================\n Coverage 74.09% 74.09% \n=======================================\n Files 93 93 \n Lines 15249 15249 \n=======================================\n Hits 11298 11298 \n Misses 3951 3951\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2738?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/2738/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `94.66% <100%> (ΓΈ)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2738?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2738?src=pr&el=footer). Last update [9e5b549...5346295](https://codecov.io/gh/huggingface/transformers/pull/2738?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,580 | 1,580 | 1,580 | CONTRIBUTOR | null | There's currently a bug in the GPT2 model which prevents it from being saved. This is caused by setting the trainable parameter on the GPT2 config, which cannot be serialized later in the save pipeline. Gotta love python...
Here is a simple script which you can use to reproduce this bug (and check the fix):
```
from transformers import TFGPT2Model

if __name__ == '__main__':
    base_model = TFGPT2Model.from_pretrained("gpt2")
    # before the fix, `trainable` leaks into the config and breaks saving
    print(base_model._layers[0].trainable)
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2738/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2738/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2738",
"html_url": "https://github.com/huggingface/transformers/pull/2738",
"diff_url": "https://github.com/huggingface/transformers/pull/2738.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2738.patch",
"merged_at": 1580928942000
} |
https://api.github.com/repos/huggingface/transformers/issues/2737 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2737/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2737/comments | https://api.github.com/repos/huggingface/transformers/issues/2737/events | https://github.com/huggingface/transformers/issues/2737 | 560,001,464 | MDU6SXNzdWU1NjAwMDE0NjQ= | 2,737 | Version 2.4.1 breaks run_lm_finetuning.py, version 2.3.0 runs fine | {
"login": "Santosh-Gupta",
"id": 5524261,
"node_id": "MDQ6VXNlcjU1MjQyNjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/5524261?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Santosh-Gupta",
"html_url": "https://github.com/Santosh-Gupta",
"followers_url": "https://api.github.com/users/Santosh-Gupta/followers",
"following_url": "https://api.github.com/users/Santosh-Gupta/following{/other_user}",
"gists_url": "https://api.github.com/users/Santosh-Gupta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Santosh-Gupta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Santosh-Gupta/subscriptions",
"organizations_url": "https://api.github.com/users/Santosh-Gupta/orgs",
"repos_url": "https://api.github.com/users/Santosh-Gupta/repos",
"events_url": "https://api.github.com/users/Santosh-Gupta/events{/privacy}",
"received_events_url": "https://api.github.com/users/Santosh-Gupta/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, can you check that the current version of `run_lm_finetuning` crashes on your side by pulling the latest repo version? The `run_lm_finetuning` script was updated 30 minutes ago in regard to that error.",
"Ah yes, it works, specifically switching into this line\r\n\r\n`labels[~masked_indices] = -1 `\r\n\r\nto this line\r\n\r\n`labels[~masked_indices] = -100`",
"Glad it works, thanks for checking.",
"I wonder what does it mean for the rest of the code base. Are masked tokens now -100 instead of -1?",
"Yes, since [v2.4.0](https://github.com/huggingface/transformers/releases/tag/v2.4.0). The reason is explained in the \"Ignored indices in PyTorch loss computing\" section in the previous link.",
"> Yes, since [v2.4.0](https://github.com/huggingface/transformers/releases/tag/v2.4.0). The reason is explained in the \"Ignored indices in PyTorch loss computing\" section in the previous link.\r\n\r\nwhere is the link?",
"The link is the [v2.4.0](https://github.com/huggingface/transformers/releases/tag/v2.4.0). You can click on it."
] | 1,580 | 1,581 | 1,580 | CONTRIBUTOR | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Bert
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
Masked language modeling
https://github.com/huggingface/transformers/blob/master/examples/run_lm_finetuning.py
The task I am working on is:
* [x] my own task or dataset: (give details below)
I am running on this dataset (though I doubt the issue is with the dataset, just use any text file)
https://drive.google.com/open?id=18oogYKR-VCQlFyUaYcGfgDiKTrFtkTHn
## To reproduce
Steps to reproduce the behavior:
```
os.environ['CUDA_LAUNCH_BLOCKING'] = "1"
python run_lm_finetuning.py --train_data_file train.raw --output_dir /output --model_type 'bert' --mlm --model_name_or_path 'bert-base-uncased' --do_train
```
Without CUDA, for a different error message:
```
python run_lm_finetuning.py --train_data_file train.raw --output_dir /output --model_type 'bert' --mlm --model_name_or_path 'bert-base-uncased' --do_train --no_cuda
```
Error message when using CUDA
```
Epoch: 0% 0/1 [00:00<?, ?it/s]
Iteration: 0% 0/17 [00:00<?, ?it/s]/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [0,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [1,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [2,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [3,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [4,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [5,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [6,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [10,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [11,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [12,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [13,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [14,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [15,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [16,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [17,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [18,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [19,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [20,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [22,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [23,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [24,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [25,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [26,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [27,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [29,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [30,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [31,0,0] Assertion `t >= 0 && t < n_classes` failed.
THCudaCheck FAIL file=/pytorch/aten/src/THCUNN/generic/ClassNLLCriterion.cu line=110 error=710 : device-side assert triggered
Traceback (most recent call last):
File "HFpretrain.py", line 771, in <module>
main()
File "HFpretrain.py", line 721, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer)
File "HFpretrain.py", line 325, in train
outputs = model(inputs, masked_lm_labels=labels) if args.mlm else model(inputs, labels=labels)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_bert.py", line 1019, in forward
masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), masked_lm_labels.view(-1))
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/loss.py", line 916, in forward
ignore_index=self.ignore_index, reduction=self.reduction)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py", line 2021, in cross_entropy
return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py", line 1838, in nll_loss
ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
RuntimeError: cuda runtime error (710) : device-side assert triggered at /pytorch/aten/src/THCUNN/generic/ClassNLLCriterion.cu:110
```
CPU error message
```
Epoch: 0% 0/1 [00:00<?, ?it/s]
Iteration: 0% 0/17 [00:00<?, ?it/s]Traceback (most recent call last):
File "HFpretrain.py", line 771, in <module>
main()
File "HFpretrain.py", line 721, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer)
File "HFpretrain.py", line 325, in train
outputs = model(inputs, masked_lm_labels=labels) if args.mlm else model(inputs, labels=labels)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_bert.py", line 1019, in forward
masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), masked_lm_labels.view(-1))
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/loss.py", line 916, in forward
ignore_index=self.ignore_index, reduction=self.reduction)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py", line 2021, in cross_entropy
return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py", line 1838, in nll_loss
ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
IndexError: Target -1 is out of bounds.
```
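As the comments note, v2.4.0 changed the label index ignored by the PyTorch loss functions from -1 to -100, which is exactly what both tracebacks above trip over. A minimal sketch of the one-line fix that landed in `run_lm_finetuning.py`:
```
# in mask_tokens(): we only compute loss on masked tokens
labels[~masked_indices] = -100  # was -1 before v2.4.0
```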
## Expected behavior
Should train as usual
## Environment info
- `transformers` version: 2.4.1
- Platform: google colab
- Python version: 3.6.9
- PyTorch version (GPU?): 1.4.0
- Tensorflow version (GPU?): n/a
- Using GPU in script?: both cpu and gpu Tesla T4
- Using distributed or parallel set-up in script?: o
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2737/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2737/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2736 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2736/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2736/comments | https://api.github.com/repos/huggingface/transformers/issues/2736/events | https://github.com/huggingface/transformers/pull/2736 | 559,958,435 | MDExOlB1bGxSZXF1ZXN0MzcxMDMzNjkx | 2,736 | TensorFlow XLM doesn't accept NumPy arrays for the attention mask | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2736?src=pr&el=h1) Report\n> Merging [#2736](https://codecov.io/gh/huggingface/transformers/pull/2736?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9c67196b83a824df577742d32d38e9121d8a9285?src=pr&el=desc) will **increase** coverage by `<.01%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2736?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2736 +/- ##\n==========================================\n+ Coverage 74.09% 74.09% +<.01% \n==========================================\n Files 93 93 \n Lines 15249 15251 +2 \n==========================================\n+ Hits 11298 11300 +2 \n Misses 3951 3951\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2736?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/2736/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `90.47% <100%> (+0.06%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2736?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2736?src=pr&el=footer). Last update [9c67196...3c9a47e](https://codecov.io/gh/huggingface/transformers/pull/2736?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Hi, any update on this PR?",
"After discussing it with @thomwolf, it seems I was mistaken when believing that our TensorFlow models should accept numpy inputs. They should be converted to TensorFlow inputs. We should update the documentation to reflect this. Closing this PR as unrelated to the doc changes."
] | 1,580 | 1,651 | 1,587 | MEMBER | null | Convert NumPy attention mask to a TensorFlow tensor so that the mask creation doesn't crash
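A minimal sketch of the idea (hypothetical shapes; the point is converting before the mask arithmetic runs):
```
import numpy as np
import tensorflow as tf

attention_mask = np.ones((1, 6), dtype=np.int32)   # NumPy input from the user
if isinstance(attention_mask, np.ndarray):
    attention_mask = tf.constant(attention_mask)   # now safe for the TF mask ops
```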
closes #2729 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2736/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2736/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2736",
"html_url": "https://github.com/huggingface/transformers/pull/2736",
"diff_url": "https://github.com/huggingface/transformers/pull/2736.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2736.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/2735 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2735/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2735/comments | https://api.github.com/repos/huggingface/transformers/issues/2735/events | https://github.com/huggingface/transformers/pull/2735 | 559,916,187 | MDExOlB1bGxSZXF1ZXN0MzcwOTk4OTY0 | 2,735 | test_attention_weights cleanup | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2735?src=pr&el=h1) Report\n> Merging [#2735](https://codecov.io/gh/huggingface/transformers/pull/2735?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/86a0bb6e2117ad98141d92b700964aa0e73f8f49?src=pr&el=desc) will **decrease** coverage by `0.27%`.\n> The diff coverage is `5.4%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2735?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2735 +/- ##\n==========================================\n- Coverage 74.09% 73.82% -0.28% \n==========================================\n Files 93 93 \n Lines 15248 15249 +1 \n==========================================\n- Hits 11298 11257 -41 \n- Misses 3950 3992 +42\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2735?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/2735/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `83.33% <ΓΈ> (ΓΈ)` | :arrow_up: |\n| [src/transformers/modeling\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/2735/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `95.85% <ΓΈ> (ΓΈ)` | :arrow_up: |\n| [src/transformers/modeling\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/2735/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG0ucHk=) | `86.41% <ΓΈ> (ΓΈ)` | :arrow_up: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2735/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `65.25% <ΓΈ> (ΓΈ)` | :arrow_up: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2735/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.21% <ΓΈ> (ΓΈ)` | :arrow_up: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2735/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `94.27% <ΓΈ> (ΓΈ)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2735/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `61.32% <ΓΈ> (ΓΈ)` | :arrow_up: |\n| [src/transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/2735/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `74.78% <ΓΈ> (ΓΈ)` | :arrow_up: |\n| [src/transformers/modeling\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/2735/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `79.14% <ΓΈ> (ΓΈ)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2735/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.39% <ΓΈ> (ΓΈ)` | :arrow_up: |\n| ... and [9 more](https://codecov.io/gh/huggingface/transformers/pull/2735/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2735?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2735?src=pr&el=footer). 
Last update [86a0bb6...ce4241a](https://codecov.io/gh/huggingface/transformers/pull/2735?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Great, thanks @sshleifer!"
] | 1,580 | 1,580 | 1,580 | CONTRIBUTOR | null | No logic changes, just uses getattr to make code more readable. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2735/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2735/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2735",
"html_url": "https://github.com/huggingface/transformers/pull/2735",
"diff_url": "https://github.com/huggingface/transformers/pull/2735.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2735.patch",
"merged_at": 1580852333000
} |
https://api.github.com/repos/huggingface/transformers/issues/2734 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2734/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2734/comments | https://api.github.com/repos/huggingface/transformers/issues/2734/events | https://github.com/huggingface/transformers/pull/2734 | 559,895,316 | MDExOlB1bGxSZXF1ZXN0MzcwOTgyMDM4 | 2,734 | pass langs parameter to certain XLM models | {
"login": "yuvalpinter",
"id": 6660928,
"node_id": "MDQ6VXNlcjY2NjA5Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/6660928?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuvalpinter",
"html_url": "https://github.com/yuvalpinter",
"followers_url": "https://api.github.com/users/yuvalpinter/followers",
"following_url": "https://api.github.com/users/yuvalpinter/following{/other_user}",
"gists_url": "https://api.github.com/users/yuvalpinter/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yuvalpinter/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuvalpinter/subscriptions",
"organizations_url": "https://api.github.com/users/yuvalpinter/orgs",
"repos_url": "https://api.github.com/users/yuvalpinter/repos",
"events_url": "https://api.github.com/users/yuvalpinter/events{/privacy}",
"received_events_url": "https://api.github.com/users/yuvalpinter/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This seems to be failing a line length check, but my lines are not the longest in the file -- let me know if I should edit (the whole file) to conform.",
"Hi, thanks for opening this pull request! For the code quality to pass, you can check what's wrong with `make quality` at the root of the repo, and fix the black/isort issues with `make style`. Do you mind running the latter command and pushing your changes?",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2734?src=pr&el=h1) Report\n> Merging [#2734](https://codecov.io/gh/huggingface/transformers/pull/2734?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9c67196b83a824df577742d32d38e9121d8a9285?src=pr&el=desc) will **decrease** coverage by `1.08%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2734?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2734 +/- ##\n=========================================\n- Coverage 74.09% 73% -1.09% \n=========================================\n Files 93 93 \n Lines 15249 15249 \n=========================================\n- Hits 11298 11133 -165 \n- Misses 3951 4116 +165\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2734?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2734/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2734/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `55.39% <0%> (-9.86%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2734/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `70.94% <0%> (-2.28%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2734/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `92.07% <0%> (-2.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2734/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.06% <0%> (-1.33%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2734?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2734?src=pr&el=footer). Last update [9c67196...6070974](https://codecov.io/gh/huggingface/transformers/pull/2734?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks for introducing me to new code check tools! Looks like we're good?",
"Great, thank you for doing the changes!"
] | 1,580 | 1,580 | 1,580 | CONTRIBUTOR | null | Adding an argument that specifies the language the SQuAD dataset is in so language-sensitive XLMs (e.g. `xlm-mlm-tlm-xnli15-1024`) don't default to language `0`.
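For reference, a minimal sketch of what the explicit language looks like at the model boundary (assuming a language-embedding XLM checkpoint whose tokenizer exposes `lang2id`):
```
import torch

lang_id = tokenizer.lang2id["en"]            # e.g. English SQuAD data
langs = torch.full_like(input_ids, lang_id)  # one language id per position
outputs = model(input_ids, langs=langs)      # instead of defaulting to language 0
```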
Allows resolution of issue #1799. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2734/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2734/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2734",
"html_url": "https://github.com/huggingface/transformers/pull/2734",
"diff_url": "https://github.com/huggingface/transformers/pull/2734.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2734.patch",
"merged_at": 1580854363000
} |
https://api.github.com/repos/huggingface/transformers/issues/2733 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2733/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2733/comments | https://api.github.com/repos/huggingface/transformers/issues/2733/events | https://github.com/huggingface/transformers/issues/2733 | 559,868,745 | MDU6SXNzdWU1NTk4Njg3NDU= | 2,733 | Save model wrapped in Keras | {
"login": "aollagnier",
"id": 47218241,
"node_id": "MDQ6VXNlcjQ3MjE4MjQx",
"avatar_url": "https://avatars.githubusercontent.com/u/47218241?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aollagnier",
"html_url": "https://github.com/aollagnier",
"followers_url": "https://api.github.com/users/aollagnier/followers",
"following_url": "https://api.github.com/users/aollagnier/following{/other_user}",
"gists_url": "https://api.github.com/users/aollagnier/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aollagnier/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aollagnier/subscriptions",
"organizations_url": "https://api.github.com/users/aollagnier/orgs",
"repos_url": "https://api.github.com/users/aollagnier/repos",
"events_url": "https://api.github.com/users/aollagnier/events{/privacy}",
"received_events_url": "https://api.github.com/users/aollagnier/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Same problem.",
"On which version are you running? Is it possible that [this fix](https://github.com/huggingface/transformers/pull/3103) fixed your issue? Can you try installing from master to check?",
"This doesn't look like the same thing I was fixing in #3103 so I doubt that that helped.",
"In particular, from `Network` docstring:\r\n\r\n```\r\n Two types of `Networks` exist: Graph Networks and Subclass Networks. Graph\r\n networks are used in the Keras Functional and Sequential APIs. Subclassed\r\n networks are used when a user subclasses the `Model` class. In general,\r\n more Keras features are supported with Graph Networks than with Subclassed\r\n Networks, specifically:\r\n\r\n - Model cloning (`keras.models.clone`)\r\n - Serialization (`model.get_config()/from_config`, `model.to_json()/to_yaml()`\r\n - Whole-model saving (`model.save()`)\r\n```\r\n\r\nBased on the traceback, apparently the model is a subclass model, so it needs to override `get_config` in order to support serialization. (The fix in #3103 is for a problem with using `TF*MainLayer` classes within a Keras model, so it doesn't address this.)",
"@gthb so is there any way to save the models wrapped in keras?",
"> @gthb so is there any way to save the models wrapped in keras?\r\n\r\nI'm sure there's _some_ way, just a question of how much custom work you have to do (probably some, given the above quote).\r\n\r\nBut are you sure you need to be using `TFBertModel` and not `TFBertMainLayer`, for your hidden layer? `TFBertModel` is literally just this (plus docstrings):\r\n\r\n```python\r\nclass TFBertModel(TFBertPreTrainedModel):\r\n def __init__(self, config, *inputs, **kwargs):\r\n super().__init__(config, *inputs, **kwargs)\r\n self.bert = TFBertMainLayer(config, name=\"bert\")\r\n\r\n def call(self, inputs, **kwargs):\r\n outputs = self.bert(inputs, **kwargs)\r\n return outputs\r\n```\r\n\r\n... so unless you need something in particular from `TFBertModel`'s superclasses, maybe using `TFBertMainLayer` directly would simplify things for you?",
"Thanks @gthb for your reply. I've updated my colab and now it works after I changed the following line:\r\n\r\n`model=TFBertModel.from_pretrained('bert-base-cased', config=config)`\r\n \r\nto:\r\n`model=TFBertMainLayer(config=config)`\r\n\r\nhowever I can't call the function from_pretrained. Is the class implicitly set by providing the config options from BERTConfig ?\r\n\r\nAnother point, I am facing a problem during the training of the model when it wraps in keras. \r\nUsing:\r\n`embedding = model([word_inputs, mask_inputs, seg_inputs])[0]`\r\nI get:\r\n`tensorflow:Gradients do not exist for variables ['tf_bert_main_layer/pooler/dense/kernel:0', 'tf_bert_main_layer/pooler/dense/bias:0'] when minimizing the loss.`\r\n\r\nI would like to use layers from transformers combined with a CNN (require 3D tensors as input) but in order to keep weights learned by the model I tried the pooler output (which provides 2D tensors): `model([word_inputs, mask_inputs, seg_inputs])[1]`\r\nbut it doesn't fit with CNN:\r\n`ValueError: Input 0 of layer input is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: [None, 768]`\r\n\r\nDo you have an idea how I should reshape it to fit with a conv1D layer ?\r\nThe error can be reproduce from my colab : https://colab.research.google.com/drive/18HYwffkXCylPqeA-8raL82vfwOjb-aLP",
"> I can't call the function from_pretrained. Is the class implicitly set by providing the config options from BERTConfig ?\r\n\r\nI'm guessing you mean that `TFBertMainLayer` does not have a `from_pretrained` method. Yep, but `BertConfig` does, so this works:\r\n\r\n```\r\nfrom transformers import BertConfig, TFBertMainLayer\r\nconfig_name = \"bert-base-uncased\" # for instance\r\nconfig = BertConfig.from_pretrained(config_name)\r\nmain_layer = TFBertMainLayer(config)\r\n```\r\n\r\n> Do you have an idea how I should reshape it to fit with a conv1D layer ?\r\n\r\nIsn't your Conv1D layer intended to convolve over the token sequence? The pooled output produces a single vector representing the whole sequence, not separate vectors for each token of the sequence. So you are probably mistaken in trying to use the pooled output (or I'm not understanding your intent).",
"Yes you've right I've misunderstood the nature of the pooler output (probably I've been misleaded by these related topics:[#2256](https://github.com/huggingface/transformers/issues/2256) and [#1727](https://github.com/huggingface/transformers/issues/1727)). So when I am using the last_hidden_state I am getting this warning:\r\n`\r\ntensorflow:Gradients do not exist for variables ['tf_bert_main_layer/pooler/dense/kernel:0', 'tf_bert_main_layer/pooler/dense/bias:0'] when minimizing the loss.`\r\n\r\nbut the model seems train however, when I load it I am getting:\r\n ```\r\n File \"/home/X/\", line 69, in train\r\n loaded_model = tf.keras.models.load_model(dirModel+self.options.t+'cnn.h5')\r\n File \"/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/saving/save.py\", line 146, in load_model\r\n return hdf5_format.load_model_from_hdf5(filepath, custom_objects, compile)\r\n File \"/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/saving/hdf5_format.py\", line 193, in load_model_from_hdf5\r\n model._make_train_function()\r\n File \"/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training.py\", line 2057, in _make_train_function\r\n params=self._collected_trainable_weights, loss=self.total_loss)\r\n File \"/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/optimizer_v2/optimizer_v2.py\", line 503, in get_updates\r\n grads = self.get_gradients(loss, params)\r\n File \"/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/optimizer_v2/optimizer_v2.py\", line 397, in get_gradients\r\n \"K.argmax, K.round, K.eval.\".format(param))\r\nValueError: Variable <tf.Variable 'tf_bert_main_layer_1/pooler/dense/kernel:0' shape=(768, 768) dtype=float32> has `None` for gradient. Please make sure that all of your ops have a gradient defined (i.e. are differentiable). Common ops without gradient: K.argmax, K.round, K.eval. \r\n\r\n```\r\nHere, the model used:\r\n```\r\n # Define inputs\r\n word_inputs = tf.keras.layers.Input(shape=(max_seq_length,), name='word_inputs', dtype='int32')\r\n mask_inputs = tf.keras.layers.Input(shape=(max_seq_length,), name='mask_inputs', dtype='int32')\r\n seg_inputs = tf.keras.layers.Input(shape=(max_seq_length,), name='seg_inputs', dtype='int32')\r\n\r\n # Call BERT model\r\n config_name = \"bert-base-uncased\" # for instance\r\n config = BertConfig.from_pretrained(config_name)\r\n main_layer = TFBertMainLayer(config)\r\n embedding = model([word_inputs, mask_inputs, seg_inputs])[0]\r\n\r\n conv=tf.keras.layers.Conv1D(128, kernel_size=5, activation='relu', name=\"input\")(embedding)\r\n pooling = tf.keras.layers.MaxPooling1D()(conv)\r\n lstm = tf.keras.layers.LSTM(128)(pooling)\r\n dense = tf.keras.layers.Dense(64, activation='relu')(lstm)\r\n\r\n # Final output \r\n outputs = tf.keras.layers.Dense(1, activation='sigmoid', name='outputs')(dense)\r\n\r\n # Compile model\r\n model = tf.keras.Model(inputs=[word_inputs, mask_inputs, seg_inputs], outputs=outputs)\r\n model.compile(optimizer=optimizer, loss=loss, metrics=['accuracy'])\r\n\r\n model.save('cnn.h5')\r\n loaded_model = tf.keras.models.load_model('cnn.h5')\r\n```\r\n\r\nSo what's I am doing wrong ?",
"@gthb \r\n> ... so unless you need something in particular from TFBertModel's superclasses, maybe using TFBertMainLayer directly would simplify things for you?\r\n\r\nSimply initializing `TFBertMainLayer` as\r\n```\r\n main_layer = TFBertMainLayer(config)\r\n```\r\nwon't load pretrained parameters as opposed to `TFBertModel.from_pretrained(...)`, right?\r\n",
"> won't load pretrained parameters as opposed to TFBertModel.from_pretrained(...), right?\r\n\r\nOops, yes, there's that little thing! π You can load the weights e.g. like this:\r\n\r\n```python\r\nbert_weights_file = TFBertPreTrainedModel.pretrained_model_archive_map[config_name]\r\nbert_weights_file = cached_path(bert_weights_file)\r\nmodel.load_weights(bert_weights_file, by_name=True)\r\n```",
"> > won't load pretrained parameters as opposed to TFBertModel.from_pretrained(...), right?\r\n> \r\n> Oops, yes, there's that little thing! You can load the weights e.g. like this:\r\n> \r\n> ```python\r\n> bert_weights_file = TFBertPreTrainedModel.pretrained_model_archive_map[config_name]\r\n> bert_weights_file = cached_path(bert_weights_file)\r\n> model.load_weights(bert_weights_file, by_name=True)\r\n> ```\r\n\r\nI'm getting this error, using transformers 2.11.0 version :\r\n```python\r\nAttributeError: type object 'TFBertPreTrainedModel' has no attribute 'pretrained_model_archive_map'\r\n```\r\nI'm using this syntax in my code : \r\n```python\r\nconfig = BertConfig.from_pretrained(config_name)\r\nbert_weights_file = TFBertPreTrainedModel.pretrained_model_archive_map[config_name]\r\n```",
"@PoriNiki yeah, from a quick `git log -S pretrained_model_archive_map` that attribute went away in https://github.com/huggingface/transformers/pull/4636 βKill model archive mapsβ β merged to master in https://github.com/huggingface/transformers/commit/d4c2cb402d6674211726fd5f4803d1090664e438 and first released in v2.11.0.\r\n\r\nBy staring at `TFPreTrainedModel.from_pretrained` a bit, the right way ought to be something like:\r\n```\r\nfrom transformers.file_utils import hf_bucket_url, TF2_WEIGHTS_NAME\r\nbert_weights_file_url = hf_bucket_url(config_name, filename=TF2_WEIGHTS_NAME)\r\nbert_weights_file = cached_path(bert_weights_file_url)\r\n```\r\n(not tested)",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I still have this issue. Can't save my model, only saving weight",
"For other people (@ch-hristov) still having trouble with this, I wrote up an explanation and workarounds on stackoverflow: https://stackoverflow.com/questions/62482511/tfbertmainlayer-gets-less-accuracy-compared-to-tfbertmodel/64000378#64000378\r\nIt seems like it would be useful to smooth out this workflow, as many people using keras will run into this issue when they try to save their model. @gthb What do you think about adding something like `from_pretrained` to `MainLayer`, and pulling out the logic from `TFPreTrainedModel.from_pretrained` to support both? ",
"Sounds good, but I have just switched jobs and am not using transformers, don't really have the cycles to help, sorry! ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hi, \r\n\r\nAlso encountering this issue, couldn't make the the solution by @dmlicht work yet.\r\nCan anyone provide another feedback on that? \r\n\r\nAlso, will this issue be addressed by the HF team? "
] | 1,580 | 1,613 | 1,606 | NONE | null | Hi all,
Sorry for my naive question, but I am trying to save my Keras model (<class 'tensorflow.python.keras.engine.training.Model'>) in which I use the TFBertModel() function as a hidden layer. To do that I use the save() function provided by the tf.keras package.
But I got this error:
```python
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
<ipython-input-13-3b315f7219da> in <module>()
----> 1 model.save('model_weights.h5')
8 frames
/tensorflow-2.1.0/python3.6/tensorflow_core/python/keras/engine/network.py in get_config(self)
915 def get_config(self):
916 if not self._is_graph_network:
--> 917 raise NotImplementedError
918 return copy.deepcopy(get_network_config(self))
919
NotImplementedError:
```
The error can be reproduce from my colab : https://colab.research.google.com/drive/18HYwffkXCylPqeA-8raL82vfwOjb-aLP
Another question: how should I call this model for prediction?
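For reference, a minimal sketch of the weight-based fallback I have in mind (assumptions: `build_model()` is a hypothetical helper that recreates the exact architecture from the colab, and the three inputs match the ones defined there):
```python
# Sketch only: subclassed Keras models can't use model.save(), but saving and
# reloading weights works if the architecture is rebuilt identically.
model.save_weights('model_weights')   # TF checkpoint format avoids the HDF5 restriction

rebuilt = build_model()               # hypothetical helper that recreates the graph
rebuilt.load_weights('model_weights')

# Prediction: feed the same three inputs used during training.
preds = rebuilt.predict([word_ids, mask_ids, seg_ids])
```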
Thx for your help! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2733/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2733/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2732 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2732/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2732/comments | https://api.github.com/repos/huggingface/transformers/issues/2732/events | https://github.com/huggingface/transformers/issues/2732 | 559,813,125 | MDU6SXNzdWU1NTk4MTMxMjU= | 2,732 | Error for run_lm_finetuning.py (CUDA error: device-side assert triggered) | {
"login": "gjgjgjik",
"id": 42555757,
"node_id": "MDQ6VXNlcjQyNTU1NzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/42555757?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gjgjgjik",
"html_url": "https://github.com/gjgjgjik",
"followers_url": "https://api.github.com/users/gjgjgjik/followers",
"following_url": "https://api.github.com/users/gjgjgjik/following{/other_user}",
"gists_url": "https://api.github.com/users/gjgjgjik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gjgjgjik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gjgjgjik/subscriptions",
"organizations_url": "https://api.github.com/users/gjgjgjik/orgs",
"repos_url": "https://api.github.com/users/gjgjgjik/repos",
"events_url": "https://api.github.com/users/gjgjgjik/events{/privacy}",
"received_events_url": "https://api.github.com/users/gjgjgjik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, thank you for your report. As discussed in #2719, 3bf5417 should have fixed it. Please let me know if it fixes your issue.",
"Yes, if I change `labels[~masked_indices] = -1` to `labels[~masked_indices] = -100`, then it works fine for both lm (GPT) and mlm (BERT-like).\r\nBut I'm worrying about 3bf5417 because I think these changes were made to fix #2718 which is _Masked indices should have -1 and not -100_.\r\n\r\n\r\n",
"The [PyTorch CrossEntropyLoss](https://pytorch.org/docs/stable/nn.html#torch.nn.CrossEntropyLoss) method has a default `ignore_index` set to -100. When no `ignore_index` is specified, it is correct to assume it is set to -100.\r\n\r\nNone of the CrossEntropy losses defined in DistilBERT have a different `ignore_index` specified, so it is correct to assume that `-100` should be used in all cases. This is the case for all models in the library since v2.4.0.",
"Then, I think this case is cleared. Thanks for your help :)"
] | 1,580 | 1,580 | 1,580 | NONE | null | ### Reporting Error
I updated transformers from 2.3.x to 2.4.1 today, and I'm facing a runtime error: RuntimeError: CUDA error: device-side assert triggered.
I reviewed recent updates and found that the commit [Follow up 213] is causing the error.
Below are the changes from the commits:
- labels[~masked_indices] = -100 # We only compute loss on masked tokens
+ labels[~masked_indices] = -1 # We only compute loss on masked tokens
The changes are related to the calculation of masked language model loss, so the problem seems to occur when args.mlm is True. (If I change the value -1 to -100, it works fine)
Any suggestions?
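For illustration (my own snippet, not from the fine-tuning script), the reason the constant matters is that PyTorch's CrossEntropyLoss ignores index -100 by default, while -1 is treated as a regular (invalid) class index:
```python
# Illustrative sketch: nn.CrossEntropyLoss skips -100 targets by default, not -1.
import torch
import torch.nn as nn

logits = torch.randn(3, 10)            # 3 positions, vocabulary of 10
labels = torch.tensor([4, -100, 7])    # the -100 position is ignored in the loss

loss_fn = nn.CrossEntropyLoss()        # default ignore_index=-100
print(loss_fn(logits, labels))         # works: only positions 0 and 2 contribute

bad_labels = torch.tensor([4, -1, 7])  # -1 is treated as a class index...
# loss_fn(logits, bad_labels)          # ...and trips the `t >= 0` assert on CUDA
```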
### Sys Info
OS: Windows 10
Transformers: 2.4.1
PyTorch: 1.4.0
Tensorflow: 2.1.0
### Full Stack Trace
C:\Users\USER\Anaconda3\python.exe C:/Users/USER/PycharmProjects/Testing/huggingface/run_lm_finetuning.py --output_dir=output --model_type=roberta --model_name_or_path=roberta-base --do_train --train_data_file=../data/wikitext-2/wiki.train.raw --do_eval --eval_data_file=../data/wikitext-2/wiki.test.raw --evaluate_during_training --mlm --per_gpu_train_batch_size=1 --per_gpu_eval_batch_size=1
2020-02-04 10:46:01.194260: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
02/04/2020 10:46:05 - WARNING - __main__ - Process rank: -1, device: cuda, n_gpu: 1, distributed training: False, 16-bits training: False
02/04/2020 10:46:05 - INFO - transformers.configuration_utils - loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/roberta-base-config.json from cache at C:\Users\USER\.cache\torch\transformers\e1a2a406b5a05063c31f4dfdee7608986ba7c6393f7f79db5e69dcd197208534.a7ab0e5de2d8321d6d6a15b199110f2c99be72976b7d151423cb8d8c261a13b6
02/04/2020 10:46:05 - INFO - transformers.configuration_utils - Model config RobertaConfig {
"architectures": [
"RobertaForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 0,
"do_sample": false,
"eos_token_ids": 0,
"finetuning_task": null,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"initializer_range": 0.02,
"intermediate_size": 3072,
"is_decoder": false,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"layer_norm_eps": 1e-05,
"length_penalty": 1.0,
"max_length": 20,
"max_position_embeddings": 514,
"model_type": "roberta",
"num_attention_heads": 12,
"num_beams": 1,
"num_hidden_layers": 12,
"num_labels": 2,
"num_return_sequences": 1,
"output_attentions": false,
"output_hidden_states": false,
"output_past": true,
"pad_token_id": 0,
"pruned_heads": {},
"repetition_penalty": 1.0,
"temperature": 1.0,
"top_k": 50,
"top_p": 1.0,
"torchscript": false,
"type_vocab_size": 1,
"use_bfloat16": false,
"vocab_size": 50265
}
02/04/2020 10:46:05 - INFO - transformers.tokenization_utils - loading file https://s3.amazonaws.com/models.huggingface.co/bert/roberta-base-vocab.json from cache at C:\Users\USER\.cache\torch\transformers\d0c5776499adc1ded22493fae699da0971c1ee4c2587111707a4d177d20257a2.ef00af9e673c7160b4d41cfda1f48c5f4cba57d5142754525572a846a1ab1b9b
02/04/2020 10:46:05 - INFO - transformers.tokenization_utils - loading file https://s3.amazonaws.com/models.huggingface.co/bert/roberta-base-merges.txt from cache at C:\Users\USER\.cache\torch\transformers\b35e7cd126cd4229a746b5d5c29a749e8e84438b14bcdb575950584fe33207e8.70bec105b4158ed9a1747fea67a43f5dee97855c64d62b6ec3742f4cfdb5feda
02/04/2020 10:46:05 - INFO - transformers.modeling_utils - loading weights file https://s3.amazonaws.com/models.huggingface.co/bert/roberta-base-pytorch_model.bin from cache at C:\Users\USER\.cache\torch\transformers\228756ed15b6d200d7cb45aaef08c087e2706f54cb912863d2efe07c89584eb7.49b88ba7ec2c26a7558dda98ca3884c3b80fa31cf43a1b1f23aef3ff81ba344e
02/04/2020 10:46:10 - INFO - transformers.modeling_utils - Weights of RobertaForMaskedLM not initialized from pretrained model: ['lm_head.decoder.bias']
02/04/2020 10:46:12 - INFO - __main__ - Training/evaluation parameters Namespace(adam_epsilon=1e-08, block_size=510, cache_dir=None, config_name=None, device=device(type='cuda'), do_eval=True, do_train=True, eval_all_checkpoints=False, eval_data_file='../data/wikitext-2/wiki.test.raw', evaluate_during_training=True, fp16=False, fp16_opt_level='O1', gradient_accumulation_steps=1, learning_rate=5e-05, line_by_line=False, local_rank=-1, logging_steps=500, max_grad_norm=1.0, max_steps=-1, mlm=True, mlm_probability=0.15, model_name_or_path='roberta-base', model_type='roberta', n_gpu=1, no_cuda=False, num_train_epochs=1.0, output_dir='output', overwrite_cache=False, overwrite_output_dir=False, per_gpu_eval_batch_size=1, per_gpu_train_batch_size=1, save_steps=500, save_total_limit=None, seed=42, server_ip='', server_port='', should_continue=False, tokenizer_name=None, train_data_file='../data/wikitext-2/wiki.train.raw', warmup_steps=0, weight_decay=0.0)
02/04/2020 10:46:12 - INFO - __main__ - Loading features from cached file ../data/wikitext-2\roberta_cached_lm_510_wiki.train.raw
02/04/2020 10:46:12 - INFO - __main__ - ***** Running training *****
02/04/2020 10:46:12 - INFO - __main__ - Num examples = 4740
02/04/2020 10:46:12 - INFO - __main__ - Num Epochs = 1
02/04/2020 10:46:12 - INFO - __main__ - Instantaneous batch size per GPU = 1
02/04/2020 10:46:12 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 1
02/04/2020 10:46:12 - INFO - __main__ - Gradient Accumulation steps = 1
02/04/2020 10:46:12 - INFO - __main__ - Total optimization steps = 4740
Epoch: 0%| | 0/1 [00:00<?, ?it/s]
Iteration: 0%| | 0/4740 [00:00<?, ?it/s]
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [0,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [1,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [2,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [3,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [6,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [7,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [8,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [9,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [10,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [11,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [12,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [13,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [15,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [16,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [17,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [18,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [19,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [20,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [21,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [22,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [23,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [24,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [25,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [26,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [27,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [28,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [30,0,0] Assertion `t >= 0 && t < n_classes` failed.
Traceback (most recent call last):
File "C:/Users/USER/PycharmProjects/Testing/huggingface/run_lm_finetuning.py", line 790, in <module>
main()
File "C:/Users/USER/PycharmProjects/Testing/huggingface/run_lm_finetuning.py", line 740, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer)
File "C:/Users/USER/PycharmProjects/Testing/huggingface/run_lm_finetuning.py", line 356, in train
loss.backward()
File "C:\Users\USER\Anaconda3\lib\site-packages\torch\tensor.py", line 195, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "C:\Users\USER\Anaconda3\lib\site-packages\torch\autograd\__init__.py", line 99, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: CUDA error: device-side assert triggered
Epoch: 0%| | 0/1 [00:00<?, ?it/s]
Iteration: 0%| | 0/4740 [00:00<?, ?it/s]
Process finished with exit code 1
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2732/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2732/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2731 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2731/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2731/comments | https://api.github.com/repos/huggingface/transformers/issues/2731/events | https://github.com/huggingface/transformers/issues/2731 | 559,795,559 | MDU6SXNzdWU1NTk3OTU1NTk= | 2,731 | Masked LM and TFBertForSequenceClassification | {
"login": "tomerwul",
"id": 33780461,
"node_id": "MDQ6VXNlcjMzNzgwNDYx",
"avatar_url": "https://avatars.githubusercontent.com/u/33780461?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tomerwul",
"html_url": "https://github.com/tomerwul",
"followers_url": "https://api.github.com/users/tomerwul/followers",
"following_url": "https://api.github.com/users/tomerwul/following{/other_user}",
"gists_url": "https://api.github.com/users/tomerwul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tomerwul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tomerwul/subscriptions",
"organizations_url": "https://api.github.com/users/tomerwul/orgs",
"repos_url": "https://api.github.com/users/tomerwul/repos",
"events_url": "https://api.github.com/users/tomerwul/events{/privacy}",
"received_events_url": "https://api.github.com/users/tomerwul/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,580 | 1,586 | 1,586 | NONE | null | Hello,
Is it correct to say that fine-tuning a TFBertForSequenceClassification model is the same as fine-tuning BERT's MLM and, in addition, a classification layer at the same time?
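To make the question concrete, a small inspection sketch (mine, for illustration) shows what the classification model actually contains: there is no MLM head, just the pretrained encoder plus a dropout and a classifier layer, all trainable by default.
```python
# Sketch: inspect what gets fine-tuned in TFBertForSequenceClassification.
from transformers import TFBertForSequenceClassification

model = TFBertForSequenceClassification.from_pretrained("bert-base-uncased")
print([layer.name for layer in model.layers])  # ['bert', 'dropout_..', 'classifier']
print(len(model.trainable_variables))          # encoder weights + classifier head
```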
Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2731/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2731/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2730 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2730/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2730/comments | https://api.github.com/repos/huggingface/transformers/issues/2730/events | https://github.com/huggingface/transformers/issues/2730 | 559,789,093 | MDU6SXNzdWU1NTk3ODkwOTM= | 2,730 | QuickStart code error | {
"login": "richwiss",
"id": 6644628,
"node_id": "MDQ6VXNlcjY2NDQ2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/6644628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/richwiss",
"html_url": "https://github.com/richwiss",
"followers_url": "https://api.github.com/users/richwiss/followers",
"following_url": "https://api.github.com/users/richwiss/following{/other_user}",
"gists_url": "https://api.github.com/users/richwiss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/richwiss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richwiss/subscriptions",
"organizations_url": "https://api.github.com/users/richwiss/orgs",
"repos_url": "https://api.github.com/users/richwiss/repos",
"events_url": "https://api.github.com/users/richwiss/events{/privacy}",
"received_events_url": "https://api.github.com/users/richwiss/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! Indeed this was an error, it should have been fixed with https://github.com/huggingface/transformers/commit/90ab15cb7a8fcf8bf58c05453ddf1aa6a4fa00c1.\r\n\r\nCould you try installing from source:\r\n\r\n```py\r\npip install git+https://github.com/huggingface/transformers\r\n```\r\n\r\nand let me know if it fixes your issue?",
"Install was successful.\r\n\r\nBut now I get the following:\r\n\r\n> Traceback (most recent call last):\r\n> File \"model2model.py\", line 82, in <module>\r\n> model = Model2Model.from_pretrained('fine-tuned-weights')\r\n> File \"/path/venvs/nlp/lib/python3.6/site-packages/transformers/modeling_encoder_decoder.py\", line 323, in from_pretrained\r\n> raise ValueError(\"Only the Bert model is currently supported.\")\r\n> ValueError: Only the Bert model is currently supported.\r\n\r\nTraded on error for another.\r\n",
"```py\r\nmodel = Model2Model.from_pretrained('fine-tuned-weights')\r\n```\r\n\r\nDo you have a folder called `fine-tuned-weights` in your directory?",
"To wrap this up: Just to confirm, there is no existing 'fine-tuned-weights' pretrained model.\r\n\r\n'fine-tuned-weights' is just a name for a hypothetical pretrained model.",
"Thank you, this commit works fine and fix the issue.",
"I'm still a bit confused going along with the Quickstart guide and trying to get a fine-tuned Model2Model to work. \r\nFirst of all, as a minor note, the suggested line in the guide\r\n `model = Model2Model.from_pretrained('fine-tuned-weights')`\r\nwon't work _even if a folder with that name exists_, as `from_pretrained` actually checks if this model path or name contains the string \"bert\" (among other things, see [here](https://github.com/huggingface/transformers/blob/e693cd1e877aa191d3317faed33e87d1558c9406/src/transformers/modeling_encoder_decoder.py#L282)). I understand that this is more of a placeholder name than anything else, but it might still be confusing. \r\n\r\nThen, let's assume I saved a fine-tuned Model2Model instance via `model.save_pretrained(PATH)` (where this PATH now contains the string \"bert\"). The suggested loading of this via `from_pretrained`will still fail: A saved Model2Model is actually split into encoder and decoder, so simply using the top directory containing both for loading will obviously fail. Thus, I only have the option of either loading the encoder _or_ decoder model, which will then, in the newly loaded Model2Model instance, be used as _both the encoder and decoder_, as this is how Model2Model is loaded: a single (BERT-)model used as encoder and decoder. But that can't be correct for _fine-tuned_ versions of this model, can it? Or am I just missing something obvious here? \r\n\r\n\r\n\r\n",
"Hi @redfarg, thanks for you comment. This is misleading indeed. We're in the process of adding BART to the library (@sshleifer), improving the experience with encoder-decoder architectures/Model2Model is part of the roadmap."
] | 1,580 | 1,582 | 1,582 | NONE | null | In the model2model quickstart example, I'm getting an error here:
`outputs = model(question_tensor, answer_tensor, decoder_lm_labels=labels_tensor)`
With the following message:
`RuntimeError: The size of tensor a (8) must match the size of tensor b (768) at non-singleton dimension 3`
Any ideas? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2730/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2730/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2729 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2729/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2729/comments | https://api.github.com/repos/huggingface/transformers/issues/2729/events | https://github.com/huggingface/transformers/issues/2729 | 559,768,165 | MDU6SXNzdWU1NTk3NjgxNjU= | 2,729 | Attention Mask for TFXLM Model doesn't work | {
"login": "dakshvar22",
"id": 8708249,
"node_id": "MDQ6VXNlcjg3MDgyNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/8708249?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dakshvar22",
"html_url": "https://github.com/dakshvar22",
"followers_url": "https://api.github.com/users/dakshvar22/followers",
"following_url": "https://api.github.com/users/dakshvar22/following{/other_user}",
"gists_url": "https://api.github.com/users/dakshvar22/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dakshvar22/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dakshvar22/subscriptions",
"organizations_url": "https://api.github.com/users/dakshvar22/orgs",
"repos_url": "https://api.github.com/users/dakshvar22/repos",
"events_url": "https://api.github.com/users/dakshvar22/events{/privacy}",
"received_events_url": "https://api.github.com/users/dakshvar22/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi! There seems to be an error in the current implementation where it doesn't accept NumPy arrays, only TensorFlow arrays. I'm working on it in [the branch fix-tf-xlm](https://github.com/huggingface/transformers/tree/fix-tf-xlm). In the meantime, you can use a tf.Tensor instead and it should work fine.\r\n\r\nPlease be aware that your attention mask should be defined as `np.ones_like(np.array([input_ids]))` instead of your current `np.ones_like(np.array(input_ids))` or else it'll be a dimension short.\r\n\r\nThe following code is your code modified to run:\r\n\r\n```py\r\nfrom transformers import *\r\nimport numpy as np\r\nimport tensorflow as tf\r\n\r\ntokenizer = XLMTokenizer.from_pretrained('xlm-mlm-enfr-1024')\r\nmodel = TFXLMModel.from_pretrained('xlm-mlm-enfr-1024')\r\ntext = \"Good evening.\"\r\ninput_ids = tokenizer.encode(text, add_special_tokens=True)\r\nlast_hidden_states = model(np.array([input_ids]), attention_mask=tf.constant(np.ones_like(np.array([input_ids]))))\r\n\r\n```",
"Hi @LysandreJik When can we expect your fix to be merged and released in an official release?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hi, is this bug now fixed? \r\nThanks!"
] | 1,580 | 1,655 | 1,589 | NONE | null | # π Bug
## Information
Model I am using (Bert, XLNet ...): XLM
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
## To reproduce
Steps to reproduce the behavior:
```
from transformers import *
import numpy as np
tokenizer = XLMTokenizer.from_pretrained('xlm-mlm-enfr-1024')
model = TFXLMModel.from_pretrained('xlm-mlm-enfr-1024')
text = "Good evening."
input_ids = tokenizer.encode(text, add_special_tokens=True)
last_hidden_states = model(np.array([input_ids]), attention_mask=np.ones_like(np.array(input_ids)))
```
Error output:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/daksh/miniconda3/envs/rasa-tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 822, in __call__
outputs = self.call(cast_inputs, *args, **kwargs)
File "/Users/daksh/miniconda3/envs/rasa-tf2/lib/python3.6/site-packages/transformers/modeling_tf_xlm.py", line 589, in call
outputs = self.transformer(inputs, **kwargs)
File "/Users/daksh/miniconda3/envs/rasa-tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 822, in __call__
outputs = self.call(cast_inputs, *args, **kwargs)
File "/Users/daksh/miniconda3/envs/rasa-tf2/lib/python3.6/site-packages/transformers/modeling_tf_xlm.py", line 348, in call
mask, attn_mask = get_masks(slen, lengths, self.causal, padding_mask=attention_mask)
File "/Users/daksh/miniconda3/envs/rasa-tf2/lib/python3.6/site-packages/transformers/modeling_tf_xlm.py", line 88, in get_masks
tf.debugging.assert_equal(shape_list(mask), [bs, slen])
File "/Users/daksh/miniconda3/envs/rasa-tf2/lib/python3.6/site-packages/transformers/modeling_tf_utils.py", line 546, in shape_list
static = x.shape.as_list()
AttributeError: 'tuple' object has no attribute 'as_list'
```
Works fine if the attention mask is removed.
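For reference, a workaround sketch based on wrapping the mask in a `tf.Tensor` with an explicit batch dimension (reusing `tokenizer`, `model` and `input_ids` from the snippet above):
```python
# Workaround sketch: pass the mask as a tf.Tensor shaped [1, seq_len].
import numpy as np
import tensorflow as tf

attention_mask = tf.constant(np.ones_like(np.array([input_ids])))
last_hidden_states = model(np.array([input_ids]), attention_mask=attention_mask)
```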
## Expected behavior
`last_hidden_states` is a tuple of type `tf.Tensor`
## Environment info
- `transformers` version: 2.3.0
- Platform: OSX
- Python version: 3.6.5
- Tensorflow version (GPU?): 2.1.0(CPU)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2729/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2729/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2728 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2728/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2728/comments | https://api.github.com/repos/huggingface/transformers/issues/2728/events | https://github.com/huggingface/transformers/issues/2728 | 559,667,477 | MDU6SXNzdWU1NTk2Njc0Nzc= | 2,728 | RuntimeError: expected dtype Float but got dtype Long - run_lm_finetuning.py | {
"login": "paulthemagno",
"id": 38130299,
"node_id": "MDQ6VXNlcjM4MTMwMjk5",
"avatar_url": "https://avatars.githubusercontent.com/u/38130299?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/paulthemagno",
"html_url": "https://github.com/paulthemagno",
"followers_url": "https://api.github.com/users/paulthemagno/followers",
"following_url": "https://api.github.com/users/paulthemagno/following{/other_user}",
"gists_url": "https://api.github.com/users/paulthemagno/gists{/gist_id}",
"starred_url": "https://api.github.com/users/paulthemagno/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/paulthemagno/subscriptions",
"organizations_url": "https://api.github.com/users/paulthemagno/orgs",
"repos_url": "https://api.github.com/users/paulthemagno/repos",
"events_url": "https://api.github.com/users/paulthemagno/events{/privacy}",
"received_events_url": "https://api.github.com/users/paulthemagno/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@paulthemagno thanks for creating this. My environment is exactly the same, except I am running the lm fine tuner on a Python 3.7.3 environment. @LysandreJik asked for more information. A couple of more inputs although I am not sure if this is going to help. This problem is not happening when I subset my dataset and run the code. Neither on training nor evaluation. So this tells me there is a data problem somewhere.\r\n\r\nSo I caught the runtime error and output the results of those data objects for my dataset on line 120-122 that is posted above. Only unusual thing I see is that one of the examples for this batch where the code errors out is all zeros for the input tensor. \r\n```\r\n\r\ntensor([[ 0., 0., 0., 0., 0., 0., 0., 0., 0.,\r\n 0., 0., 0., 0., 0., 0., 0., 0., 0.,\r\n 0., 0., 0., 0., 0., 0., 0., 0., 0.,\r\n 0., 0., 0., 0., 0., 0., 0., 0., 0.,\r\n 0., 0., 0., 0., 0., 0., 0., 0., 0.,\r\n 0., 0., 0., 0., 0., 0.],\r\n[ 2004., 2019., 5587., 10497., 2819., 2000., 1996., 2194., 2015.,\r\n 2128., 8569., 28200., 10896., 1010., 3531., 2421., 1037., 2862.,\r\n 1997., 2169., 8875., 2073., 1996., 2194., 103., 2015., 2007.,\r\n 1996., 103., 1997., 1996., 2060., 4243., 2000., 103., 8946.,\r\n 3388., 1012., 0., 0., 0., 0., 0., 0., 0.,\r\n 0., 0., 0., 0., 0., 0.],\r\n...\r\n```\r\nMy data is split as 1) each sentence in one line, 2) and the documents are split by an empty line as recommended in the documentation. I looked at that particular document but I do not see anything unusual but maybe some of this provide some clues. My hunch was that there is a non-ascii character which I am cleaning up, or maybe a dash or underscore repeating many many times for that particular example but if my eyes are not failing me, I can't find that in the dataset for that batch.\r\n\r\nThanks you all... \r\n\r\n",
"Hi, I'm looking into this problem right now, thank you for providing so much helpful information!\r\nI could set up an experiment where I would get the same error, and I patched it with a cast as @paulthemagno recommended.\r\n\r\nIt is visible in 1ebfeb7. This should hopefully patch your issue, but as I don't have your particular dataset I can't verify first hand. Do you mind letting me know if it fixes it?",
"> Hi, I'm looking into this problem right now, thank you for providing so much helpful information!\r\n> I could set up an experiment where I would get the same error, and I patched it with a cast as @paulthemagno recommended.\r\n> \r\n> It is visible in [9c67196](https://github.com/huggingface/transformers/commit/9c67196b83a824df577742d32d38e9121d8a9285). This should hopefully patch your issue, but as I don't have your particular dataset I can't verify first hand. Do you mind letting me know if it fixes it?\r\n\r\n\r\nThanks, that patch or code block does not reflect the change, maybe a typo on the commit hash? ",
"Indeed, sorry, edited.",
"@LysandreJik happy to confirm that it worked. I patched my own script with the added line for casting the inputs and it ran through the whole corpus 160K records and outputted 5.6 perplexity score. I am assuming it worked. Thank you very much... \r\n\r\nOzan",
"Fantastic! Thank you for checking!",
"> Fantastic! Thank you for checking!\r\n\r\nYou're welcome. I am glad I could help. By the way, out of topic, could you shed some light on why my input tensors are truncated to length = 51 as you can see in my original post above. I don't see where I set that to 51 nor a hard code somewhere. Here are my script arguments:\r\n\r\n```\r\npython run_lm_finetuning.py \\\r\n --train_data_file /path/to/data \\\r\n --eval_data_file /path/to/eval_file \\\r\n --output_dir /path/fine_tuned/bert_uncased_lm \\\r\n --mlm \\\r\n --do_train \\\r\n --do_eval \\\r\n --cache_dir /cache_dir \\\r\n --model_type bert \\\r\n --model_name_or_path bert-base-uncased \\\r\n --per_gpu_train_batch_size 16 \\\r\n --gradient_accumulation_steps 2 \\\r\n --per_gpu_eval_batch_size 16 \\\r\n --block_size 256 \\\r\n --eval_all_checkpoints \\\r\n --line_by_line \\\r\n --fp16\r\n```\r\nAs far as I understand Block size is the after tokenization, max seq length is that, where is 51 coming from? This might be a stupid question but I am just trying to avoid making a gross error and get a little bit more of an understanding of the code. \r\n\r\nOzan\r\n",
"That seems weird, indeed, but it's hard for me to debug without having more information about your dataset. Since you're using the `--line_by_line` flag, it should be building tensors according to the line returns in your dataset. Is it possible 51 is the maximum length of a sequence for that specific batch, so it pads up to 51 for the rest of the batch?",
"Yes, that must be it, I checked some random batches and the length for the input tensors varies from batch to batch. I apologize for sidetracking this thread. Seemed like while I had you and the data above, I would get a quick answer. thank you again. ",
"No worries, glad I could help.",
"> Hi, I'm looking into this problem right now, thank you for providing so much helpful information!\r\n> I could set up an experiment where I would get the same error, and I patched it with a cast as @paulthemagno recommended.\r\n> \r\n> It is visible in [1ebfeb7](https://github.com/huggingface/transformers/commit/1ebfeb79469d544a2bd817aa32c77e0514485ff9). This should hopefully patch your issue, but as I don't have your particular dataset I can't verify first hand. Do you mind letting me know if it fixes it?\r\n\r\nThanks to all. I had already launched the code before you wrote this message, with the additional line `inputs = inputs.type(dtype=torch.long)` without the _clone_ method. It has worked, but I think it is better to restart from 0. Also beacuse re-launching the code from the last saved checkpoint (before the crash), I have noticed that the first new checkpoint had a peek on the perplexity and after that it return to decrease, so better restarting.\r\n\r\nAnyway the code worked with my change, so I think also with yours, which is more correct :)"
] | 1,580 | 1,580 | 1,580 | NONE | null | # π Bug
## Information
I'm using my Camembert-based language model on Italian language (built from scratch).
I'm trying to use [run_lm_finetuning.py](https://github.com/huggingface/transformers/blob/master/examples/run_lm_finetuning.py) to fine-tune my language model on a dataset.
@julien-c suggested that I add `--line_by_line` to my launch script, because without that flag the program blocked on the tokenization of the training set. That advice let the program work, but after some hours the program crashes with a strange RuntimeError in the assignment of 10% of random words to masks at [line 218](https://github.com/huggingface/transformers/blob/master/examples/run_lm_finetuning.py#L218) in the _mask_tokens()_ function:
```python
# 10% of the time, we replace masked input tokens with random word
indices_random = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked_indices & ~indices_replaced
random_words = torch.randint(len(tokenizer), labels.shape, dtype=torch.long)
inputs[indices_random] = random_words[indices_random] #it crashes here
```
The error was this:
`RuntimeError: expected dtype Float but got dtype Long`
It's a strange error, because it crashes several minutes or hours after the launch. How long the program runs correctly seems random: sometimes the script completes 3/4 epochs and then crashes, sometimes it crashes before the end of the first epoch.
## To reproduce
I launched this:
```bash
python3 run_lm_finetuning.py \
--train_data_file /path/to/train.txt \
--eval_data_file /path/to/eval.txt \
--output_dir /path/to/output \
--mlm \
--do_train \
--do_eval \
--model_type camembert \
--model_name_or_path /path/to/my/model \
--per_gpu_train_batch_size 8 \
--per_gpu_eval_batch_size 8 \
--overwrite_output_dir \
--overwrite_cache \
--max_steps 500000 \
--block_size 128 \
--save_steps 50000 \
--eval_all_checkpoints \
--line_by_line
```
I got this error in the middle of the 6th epoch:
```
File "run_lm_finetuning.py", line 801, in <module>51:22, 4.94it/s]
main()
File "run_lm_finetuning.py", line 750, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer)
File "run_lm_finetuning.py", line 342, in train
inputs, labels = mask_tokens(batch, tokenizer, args) if args.mlm else (batch, batch)
File "run_lm_finetuning.py", line 222, in mask_tokens
inputs[indices_random] = random_words[indices_random]
RuntimeError: expected dtype Float but got dtype Long
Epoch: 55%|██████ | 6/11 [20:26:33<17:02:07, 12265.60s/it]
Iteration: 69%|███████ | 33378/48603 [1:47:45<49:09, 5.16it/s]
```
I'm managing to run the code anyway, restarting the program using the flag `--model_name_or_path` and giving the last saved checkpoint rather than the original language model every time it crashes.
I printed `inputs[indices_random]` and `random_words[indices_random]` because they are the two variables on the line where the program crashes:
- The code crashes with these 2 variables:
```
inputs[indices_random] = tensor([1173.])
Random_words[indices_random] = tensor([4220])
Traceback (most recent call last):
File "run_lm_finetuning.py", line 797, in <module>
main()
File "run_lm_finetuning.py", line 747, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer)
File "run_lm_finetuning.py", line 349, in train
inputs, labels = mask_tokens(batch, tokenizer, args) if args.mlm else (batch, batch)
File "run_lm_finetuning.py", line 229, in mask_tokens
inputs[indices_random] = random_words[indices_random]
RuntimeError: expected dtype Float but got dtype Long
Epoch: 60%|██████ | 3/5 [14:31:21<9:40:54, 17427.18s/it]
```
- while before the crash the code enters the _mask_tokens()_ function correctly and prints lines like these:
```
inputs[indices_random] = tensor([19807, 78, 51, 1204])
Random_words[indices_random] = tensor([14538, 15381, 30255, 3778])
```
In my opinion the only difference is that **tensor([1173.])** in the crash example contains a non-integer value (there is a '.' at the end of the number), while all the other times there is not. Maybe a cast of `inputs[indices_random]` to an integer type would make it work.
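Concretely, the cast I have in mind would look like this (a sketch, inserted just before the failing assignment; it assumes the script's existing `torch` import):
```python
# Sketch: force the inputs back to an integer dtype before the masked assignment.
inputs = inputs.to(torch.long)
inputs[indices_random] = random_words[indices_random]
```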
## Environment info
- `transformers` version: 2.4.1
- Platform: Linux-4.4.0-108-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.8
- PyTorch version (GPU?): 1.3.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2728/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2728/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2727 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2727/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2727/comments | https://api.github.com/repos/huggingface/transformers/issues/2727/events | https://github.com/huggingface/transformers/issues/2727 | 559,652,475 | MDU6SXNzdWU1NTk2NTI0NzU= | 2,727 | XLM Roberta token_type_ids bug with batch_encode_plus | {
"login": "tamuhey",
"id": 24998666,
"node_id": "MDQ6VXNlcjI0OTk4NjY2",
"avatar_url": "https://avatars.githubusercontent.com/u/24998666?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tamuhey",
"html_url": "https://github.com/tamuhey",
"followers_url": "https://api.github.com/users/tamuhey/followers",
"following_url": "https://api.github.com/users/tamuhey/following{/other_user}",
"gists_url": "https://api.github.com/users/tamuhey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tamuhey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tamuhey/subscriptions",
"organizations_url": "https://api.github.com/users/tamuhey/orgs",
"repos_url": "https://api.github.com/users/tamuhey/repos",
"events_url": "https://api.github.com/users/tamuhey/events{/privacy}",
"received_events_url": "https://api.github.com/users/tamuhey/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# Note\r\n\r\n- No error occurs for other models (e.g. `bert-base-cased`)\r\n- I think the configuration of `xlm-roberta-base` is incorrect:\r\n\r\n```\r\n>>> cfg = XLMRobertaConfig.from_pretrained(\"xlm-roberta-base\")\r\n>>> cfg.type_vocab_size\r\n1 # 2 is correct?\r\n```",
"> * I think the configuration of `xlm-roberta-base` is incorrect:\r\n> \r\n> \r\n> ```\r\n> >>> cfg = XLMRobertaConfig.from_pretrained(\"xlm-roberta-base\")\r\n> >>> cfg.type_vocab_size\r\n> 1 # 2 is correct?\r\n> ```\r\nNo the configuration is correct. The offical [XLM-RoBERTa](https://github.com/pytorch/fairseq/tree/master/examples/xlmr) doesn't have any token_type_ids:\r\n```\r\n...\r\n (decoder): RobertaEncoder(\r\n (sentence_encoder): TransformerSentenceEncoder(\r\n (embed_tokens): Embedding(250002, 1024, padding_idx=1)\r\n (embed_positions): LearnedPositionalEmbedding(514, 1024, padding_idx=1)\r\n (layers)\r\n...\r\n```\r\nThe problem here is that encode_plus produces model independent token_type_ids. I'm currently working on a fix (#2702). You can just replace the produced token_type_ids for now with:\r\n\r\n`x = {key:value for (key,value) in x.items() if key != 'token_type_ids'}`\r\n",
"Hi! This will work once #3198 is merged. Please note, however, that the following:\r\n\r\n```py\r\nx = tokenizer.batch_encode_plus(\r\n [\"foo\", \"bar bar bar\"], add_special_tokens=True, return_tensors=\"pt\"\r\n)\r\n```\r\n\r\nwill not work as your two sequences, \"foo\" and \"bar bar bar\", once tokenized, are not of equal length. To ensure this gets tokenized, you will need to pass `pad_to_max_length=True` to `batch_encode_plus`:\r\n\r\n```py\r\nx = tokenizer.batch_encode_plus(\r\n [\"foo\", \"bar bar bar\"], add_special_tokens=True, return_tensors=\"pt\", pad_to_max_length=True\r\n)\r\n```"
] | 1,580 | 1,584 | 1,584 | CONTRIBUTOR | null | # π Bug
## Information
Model I am using (Bert, XLNet ...): XLM Roberta
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```
from transformers import XLMRobertaTokenizer, XLMRobertaModel
name = "xlm-roberta-base"
tokenizer = XLMRobertaTokenizer.from_pretrained(name)
model = XLMRobertaModel.from_pretrained(name)
x = tokenizer.batch_encode_plus(
["foo", "bar bar bar"], add_special_tokens=True, return_tensors="pt"
)
model(**x)
```
<details><summary>Output</summary>
<p>
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-33-3743974223ad> in <module>
7 ["foo", "bar bar bar"], add_special_tokens=True, return_tensors="pt"
8 )
----> 9 model(**x)
~/Library/Caches/pypoetry/virtualenvs/camphr-v19AnSgn-py3.7/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
530 result = self._slow_forward(*input, **kwargs)
531 else:
--> 532 result = self.forward(*input, **kwargs)
533 for hook in self._forward_hooks.values():
534 hook_result = hook(self, input, result)
~/Library/Caches/pypoetry/virtualenvs/camphr-v19AnSgn-py3.7/lib/python3.7/site-packages/transformers/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask)
797
798 embedding_output = self.embeddings(
--> 799 input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds
800 )
801 encoder_outputs = self.encoder(
~/Library/Caches/pypoetry/virtualenvs/camphr-v19AnSgn-py3.7/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
530 result = self._slow_forward(*input, **kwargs)
531 else:
--> 532 result = self.forward(*input, **kwargs)
533 for hook in self._forward_hooks.values():
534 hook_result = hook(self, input, result)
~/Library/Caches/pypoetry/virtualenvs/camphr-v19AnSgn-py3.7/lib/python3.7/site-packages/transformers/modeling_roberta.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds)
62
63 return super().forward(
---> 64 input_ids, token_type_ids=token_type_ids, position_ids=position_ids, inputs_embeds=inputs_embeds
65 )
66
~/Library/Caches/pypoetry/virtualenvs/camphr-v19AnSgn-py3.7/lib/python3.7/site-packages/transformers/modeling_bert.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds)
189 inputs_embeds = self.word_embeddings(input_ids)
190 position_embeddings = self.position_embeddings(position_ids)
--> 191 token_type_embeddings = self.token_type_embeddings(token_type_ids)
192
193 embeddings = inputs_embeds + position_embeddings + token_type_embeddings
~/Library/Caches/pypoetry/virtualenvs/camphr-v19AnSgn-py3.7/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
530 result = self._slow_forward(*input, **kwargs)
531 else:
--> 532 result = self.forward(*input, **kwargs)
533 for hook in self._forward_hooks.values():
534 hook_result = hook(self, input, result)
~/Library/Caches/pypoetry/virtualenvs/camphr-v19AnSgn-py3.7/lib/python3.7/site-packages/torch/nn/modules/sparse.py in forward(self, input)
112 return F.embedding(
113 input, self.weight, self.padding_idx, self.max_norm,
--> 114 self.norm_type, self.scale_grad_by_freq, self.sparse)
115
116 def extra_repr(self):
~/Library/Caches/pypoetry/virtualenvs/camphr-v19AnSgn-py3.7/lib/python3.7/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
1482 # remove once script supports set_grad_enabled
1483 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1484 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
1485
1486
RuntimeError: index out of range: Tried to access index 1 out of table with 0 rows. at ../aten/src/TH/generic/THTensorEvenMoreMath.cpp:418
```
</p>
</details>
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
No error
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.4.1
- Platform: OS X
- Python version: 3.7.4
- PyTorch version (GPU?): 1.4.0 (no GPU)
- Tensorflow version (GPU?):
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2727/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2727/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2726 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2726/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2726/comments | https://api.github.com/repos/huggingface/transformers/issues/2726/events | https://github.com/huggingface/transformers/issues/2726 | 559,597,672 | MDU6SXNzdWU1NTk1OTc2NzI= | 2,726 | KeyError from ids.append(self.vocab[token]) in convert_tokens_to_ids(self, tokens) | {
"login": "jiangjiaqi6",
"id": 33390819,
"node_id": "MDQ6VXNlcjMzMzkwODE5",
"avatar_url": "https://avatars.githubusercontent.com/u/33390819?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiangjiaqi6",
"html_url": "https://github.com/jiangjiaqi6",
"followers_url": "https://api.github.com/users/jiangjiaqi6/followers",
"following_url": "https://api.github.com/users/jiangjiaqi6/following{/other_user}",
"gists_url": "https://api.github.com/users/jiangjiaqi6/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiangjiaqi6/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiangjiaqi6/subscriptions",
"organizations_url": "https://api.github.com/users/jiangjiaqi6/orgs",
"repos_url": "https://api.github.com/users/jiangjiaqi6/repos",
"events_url": "https://api.github.com/users/jiangjiaqi6/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiangjiaqi6/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"The token that you are trying to convert to ids doesn't exist. 'persuading' is a long word, so it's likely that it is not as such present in the vocab. Instead you'll have to tokenize it first into subword units.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,580 | 1,586 | 1,586 | NONE | null | # ❓ Questions & Help
## Details
<!-- Description of your issue -->
KeyError Traceback (most recent call last)
<ipython-input-20-dfb0a32a8e67> in <module>()
19 for mask_pos in mask_positions:
20 candidates = options[num]
---> 21 candidates_ids = tokenizer.convert_tokens_to_ids(candidates)
22 token_ids = tokenizer.convert_tokens_to_ids(tokenized_text)
23 tokens_tensor = torch.tensor([token_ids])
~/anaconda3/lib/python3.6/site-packages/pytorch_pretrained_bert/tokenization.py in convert_tokens_to_ids(self, tokens)
119 ids = []
120 for token in tokens:
--> 121 ids.append(self.vocab[token])
122 if len(ids) > self.max_len:
123 logger.warning(
KeyError: 'persuading'
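For reference, a minimal sketch of the subword route that avoids the error (assuming the `pytorch_pretrained_bert` `BertTokenizer` with a stock vocab such as `bert-base-uncased`; the exact subword pieces are illustrative):
```python
from pytorch_pretrained_bert import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# tokenize() splits out-of-vocabulary words into WordPiece subwords,
# so every resulting piece is guaranteed to have an id in the vocab
tokens = tokenizer.tokenize("persuading")      # exact pieces depend on the vocab
ids = tokenizer.convert_tokens_to_ids(tokens)  # no KeyError on subword pieces
```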
How can I solve this KeyError? Thank you. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2726/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2726/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2725 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2725/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2725/comments | https://api.github.com/repos/huggingface/transformers/issues/2725/events | https://github.com/huggingface/transformers/issues/2725 | 559,441,283 | MDU6SXNzdWU1NTk0NDEyODM= | 2,725 | add TinyBERT? | {
"login": "Dicksonchin93",
"id": 28866718,
"node_id": "MDQ6VXNlcjI4ODY2NzE4",
"avatar_url": "https://avatars.githubusercontent.com/u/28866718?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dicksonchin93",
"html_url": "https://github.com/Dicksonchin93",
"followers_url": "https://api.github.com/users/Dicksonchin93/followers",
"following_url": "https://api.github.com/users/Dicksonchin93/following{/other_user}",
"gists_url": "https://api.github.com/users/Dicksonchin93/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Dicksonchin93/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dicksonchin93/subscriptions",
"organizations_url": "https://api.github.com/users/Dicksonchin93/orgs",
"repos_url": "https://api.github.com/users/Dicksonchin93/repos",
"events_url": "https://api.github.com/users/Dicksonchin93/events{/privacy}",
"received_events_url": "https://api.github.com/users/Dicksonchin93/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"tinybert just a normal bert with smaller parameters. I am not sure whether huggingface team will create new object called `TinyBert`. I think you can simply contact `huawei-noah` first to get permission to upload tinybert using your personal account.",
"Or you could ask them if they would create an [org account](https://huggingface.co/organizations) and upload TinyBert there.\r\n\r\nI'll also ping them as it would be really great (cc @jacobrxz)",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,580 | 1,590 | 1,590 | NONE | null | # 🌟 New model addition
## Model description
TinyBERT is a smaller version of the base BERT model. It uses transformer distillation (a type of knowledge distillation) to transfer the knowledge encoded in a large "teacher" BERT to a small "student" TinyBERT. It is empirically effective and achieves more than 96% of the performance of its teacher BERT-Base on the GLUE benchmark, while being 7.5x smaller and 9.4x faster at inference. TinyBERT is also significantly better than state-of-the-art baselines for BERT distillation, using only ~28% of their parameters and ~31% of their inference time. This is a feature request to add the pretrained weights of TinyBERT after general distillation from https://github.com/huawei-noah/Pretrained-Language-Model, together with the model for both TF 2.0 and PyTorch. I think the transformer distillation method should be introduced too.
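For intuition, the plain soft-label distillation objective looks roughly like the sketch below; TinyBERT's transformer distillation additionally matches embeddings, attention matrices, and hidden states between chosen teacher and student layers (see the paper for the full objective):
```python
import torch.nn.functional as F

def soft_label_distillation_loss(student_logits, teacher_logits, temperature=1.0):
    # KL divergence between temperature-softened teacher and student
    # output distributions; the T^2 factor keeps gradient magnitudes stable
    t = temperature
    return F.kl_div(
        F.log_softmax(student_logits / t, dim=-1),
        F.softmax(teacher_logits / t, dim=-1),
        reduction="batchmean",
    ) * (t * t)
```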
https://arxiv.org/pdf/1909.10351.pdf
<!-- Important information -->
## Open source status
* [x] the model implementation is available: (give details)
https://github.com/huawei-noah/Pretrained-Language-Model (to my knowledge, only a PyTorch implementation is available at the moment)
https://github.com/koursaros-ai/nboost
* [x] the model weights are available: (give details)
https://github.com/huawei-noah/Pretrained-Language-Model
* [x] who are the authors: (mention them, if possible by @gh-username)
https://github.com/huawei-noah/Pretrained-Language-Model @jacobrxz https://github.com/jacobrxz
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2725/reactions",
"total_count": 15,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 11,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2725/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2724 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2724/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2724/comments | https://api.github.com/repos/huggingface/transformers/issues/2724/events | https://github.com/huggingface/transformers/issues/2724 | 559,406,630 | MDU6SXNzdWU1NTk0MDY2MzA= | 2,724 | sequence labeling for sentences and not tokens | {
"login": "antgr",
"id": 2175768,
"node_id": "MDQ6VXNlcjIxNzU3Njg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2175768?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/antgr",
"html_url": "https://github.com/antgr",
"followers_url": "https://api.github.com/users/antgr/followers",
"following_url": "https://api.github.com/users/antgr/following{/other_user}",
"gists_url": "https://api.github.com/users/antgr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/antgr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/antgr/subscriptions",
"organizations_url": "https://api.github.com/users/antgr/orgs",
"repos_url": "https://api.github.com/users/antgr/repos",
"events_url": "https://api.github.com/users/antgr/events{/privacy}",
"received_events_url": "https://api.github.com/users/antgr/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi! You could leverage one of the `XXXForSequenceClassification` models for this. Their purpose is to classify sequences into a given number of labels. You would need to initialize a model from a pre-trained checkpoint:\r\n\r\n```py\r\nfrom transformers import BertForSequenceClassification\r\n\r\nmodel = BertForSequenceClassification.from_pretrained(\"bert-base-cased\")\r\n```\r\n\r\nThis instantiates the base transformer model, but doesn't instantiate the classifier layer on top, you would need to train that with a fine-tuning on your own specific task. ",
"Does the fact that I want to classify entire sentences and not words, makes any difference? And if yes what is this difference? Is there any example with this specific use case?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,580 | 1,587 | 1,587 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiasts can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
I have sentences that belong to a paragraph, and each sentence has a label:
[s1, s2, s3, ...], [l1, l2, l3, ...]
I understand that I have to encode each sentence with an encoder, e.g. BERT, and then run sequence labeling over the sentence encodings. Could you guide me on how to combine the two?
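A minimal sketch of the setup I mean (all names are my own; `bert-base-uncased` and the LSTM tagger on top are only illustrations):
```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
encoder = BertModel.from_pretrained("bert-base-uncased")

sentences = ["first sentence ...", "second sentence ...", "third sentence ..."]

with torch.no_grad():
    vectors = []
    for s in sentences:
        ids = tokenizer.encode(s, add_special_tokens=True, return_tensors="pt")
        # use the [CLS] hidden state as a fixed-size sentence vector
        vectors.append(encoder(ids)[0][:, 0])
    paragraph = torch.stack(vectors, dim=1)  # shape: (1, num_sentences, 768)

# any sequence labeler over the sentence vectors would do, e.g. an LSTM:
tagger = torch.nn.LSTM(input_size=768, hidden_size=256, batch_first=True)
hidden, _ = tagger(paragraph)  # one hidden state (then one label) per sentence
```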
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**:
https://stackoverflow.com/questions/60048900/sequence-labeling-for-sentences-and-not-tokens | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2724/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2724/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2723 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2723/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2723/comments | https://api.github.com/repos/huggingface/transformers/issues/2723/events | https://github.com/huggingface/transformers/pull/2723 | 559,398,248 | MDExOlB1bGxSZXF1ZXN0MzcwNTc0ODI0 | 2,723 | Improved testing | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2723?src=pr&el=h1) Report\n> Merging [#2723](https://codecov.io/gh/huggingface/transformers/pull/2723?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6c1b23554f8bb5b5e1f6c80969acab764c755678?src=pr&el=desc) will **increase** coverage by `0.93%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2723?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2723 +/- ##\n==========================================\n+ Coverage 74.09% 75.03% +0.93% \n==========================================\n Files 93 93 \n Lines 15248 15248 \n==========================================\n+ Hits 11298 11441 +143 \n+ Misses 3950 3807 -143\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2723?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/configuration\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/2723/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2ZsYXViZXJ0LnB5) | `100% <0%> (+25%)` | :arrow_up: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2723/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.77% <0%> (+30.51%)` | :arrow_up: |\n| [src/transformers/modeling\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/2723/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19mbGF1YmVydC5weQ==) | `83.82% <0%> (+55.14%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2723?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2723?src=pr&el=footer). Last update [6c1b235...74b1cb3](https://codecov.io/gh/huggingface/transformers/pull/2723?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,580 | 1,651 | 1,580 | MEMBER | null | Adding some tests for some models that were not tested. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2723/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2723/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2723",
"html_url": "https://github.com/huggingface/transformers/pull/2723",
"diff_url": "https://github.com/huggingface/transformers/pull/2723.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2723.patch",
"merged_at": 1580857536000
} |
https://api.github.com/repos/huggingface/transformers/issues/2722 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2722/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2722/comments | https://api.github.com/repos/huggingface/transformers/issues/2722/events | https://github.com/huggingface/transformers/issues/2722 | 559,344,966 | MDU6SXNzdWU1NTkzNDQ5NjY= | 2,722 | Bert and Roberta models cannot be converted to TFLite | {
"login": "neonbjb",
"id": 833082,
"node_id": "MDQ6VXNlcjgzMzA4Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/833082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/neonbjb",
"html_url": "https://github.com/neonbjb",
"followers_url": "https://api.github.com/users/neonbjb/followers",
"following_url": "https://api.github.com/users/neonbjb/following{/other_user}",
"gists_url": "https://api.github.com/users/neonbjb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/neonbjb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neonbjb/subscriptions",
"organizations_url": "https://api.github.com/users/neonbjb/orgs",
"repos_url": "https://api.github.com/users/neonbjb/repos",
"events_url": "https://api.github.com/users/neonbjb/events{/privacy}",
"received_events_url": "https://api.github.com/users/neonbjb/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"cc @Pierrci ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,580 | 1,586 | 1,586 | CONTRIBUTOR | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Bert / Roberta
Language I am using the model on (English, Chinese ...): N/A
The problem arises when using:
* [ ] the official example scripts: (give details below)
Sort of. Using the tflite conversion script provided here:
https://github.com/huggingface/tflite-android-transformers/blob/master/models_generation/distilbert.py
The task I am working on is: converting models to TFLite format
## To reproduce
Steps to reproduce the behavior:
I first tried the example script provided above to convert a distilbert model to tflite, and it worked fine. The GPT conversion also works great.
Next, I modified the above script to the following:
```
import tensorflow as tf
from transformers import TFRobertaModel
model = TFRobertaModel.from_pretrained('roberta-base')
input_spec = [tf.TensorSpec([1, 128], tf.int32), tf.TensorSpec([1, 128], tf.int32)]
model._set_inputs(input_spec, training=False)
print(model.inputs)
print(model.outputs)
converter = tf.lite.TFLiteConverter.from_keras_model(model)
# For conversion with hybrid quantization:
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
converter.experimental_new_converter = True
tflite_model = converter.convert()
```
Note that the above can be reproduced with `TFBertModel` and `bert-base-cased`, using 3 input tensors instead of 2, with the same result as below.
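Roughly, that BERT variant looks like this (the third spec is my assumption for `token_type_ids`, mirroring the shapes above):
```python
import tensorflow as tf
from transformers import TFBertModel

model = TFBertModel.from_pretrained('bert-base-cased')

# input_ids, attention_mask and token_type_ids, all of shape (1, 128)
input_spec = [
    tf.TensorSpec([1, 128], tf.int32),
    tf.TensorSpec([1, 128], tf.int32),
    tf.TensorSpec([1, 128], tf.int32),
]
model._set_inputs(input_spec, training=False)
# ... then the converter code above, unchanged, fails with the same error
```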
## Expected behavior
No errors; a TFLite model is created.
## Actual behavior
Error for both BERT and Roberta:
```
[<tf.Tensor 'input_1_11:0' shape=(None, 128) dtype=int32>, <tf.Tensor 'input_2_9:0' shape=(None, 128) dtype=int32>]
[<tf.Tensor 'tf_roberta_model_1/Identity:0' shape=(None, 128, 768) dtype=float32>, <tf.Tensor 'tf_roberta_model_1/Identity_1:0' shape=(None, 768) dtype=float32>]
---------------------------------------------------------------------------
ConverterError Traceback (most recent call last)
<ipython-input-15-1f63532e8b87> in <module>
26 converter.experimental_new_converter = True
27
---> 28 tflite_model = converter.convert()
29
30 open("distilbert-squad-384.tflite", "wb").write(tflite_model)
c:\drive\projects\ml-notebooks\pycharm-venv\lib\site-packages\tensorflow_core\lite\python\lite.py in convert(self)
444 input_tensors=input_tensors,
445 output_tensors=output_tensors,
--> 446 **converter_kwargs)
447
448 if self._is_calibration_quantize():
c:\drive\projects\ml-notebooks\pycharm-venv\lib\site-packages\tensorflow_core\lite\python\convert.py in toco_convert_impl(input_data, input_tensors, output_tensors, enable_mlir_converter, *args, **kwargs)
447 input_data.SerializeToString(),
448 debug_info_str=debug_info_str,
--> 449 enable_mlir_converter=enable_mlir_converter)
450 return data
451
c:\drive\projects\ml-notebooks\pycharm-venv\lib\site-packages\tensorflow_core\lite\python\convert.py in toco_convert_protos(model_flags_str, toco_flags_str, input_data_str, debug_info_str, enable_mlir_converter)
198 stdout = _try_convert_to_unicode(stdout)
199 stderr = _try_convert_to_unicode(stderr)
--> 200 raise ConverterError("See console for info.\n%s\n%s\n" % (stdout, stderr))
201 finally:
202 # Must manually cleanup files.
ConverterError: See console for info.
2020-02-03 14:16:20.869205: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll
2020-02-03 14:16:25.853657: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: Cumsum
2020-02-03 14:16:25.854123: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: Erf
2020-02-03 14:16:25.854715: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: Erf
2020-02-03 14:16:25.855259: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: Erf
2020-02-03 14:16:25.855869: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: Erf
2020-02-03 14:16:25.856324: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: Erf
2020-02-03 14:16:25.856863: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: Erf
2020-02-03 14:16:25.857394: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: Erf
2020-02-03 14:16:25.857914: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: Erf
2020-02-03 14:16:25.858543: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: Erf
2020-02-03 14:16:25.859107: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: Erf
2020-02-03 14:16:25.859552: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: Erf
2020-02-03 14:16:25.860084: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: Erf
2020-02-03 14:16:26.060782: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 1517 operators, 2651 arrays (0 quantized)
2020-02-03 14:16:26.149298: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 1517 operators, 2651 arrays (0 quantized)
2020-02-03 14:16:26.149831: F tensorflow/lite/toco/graph_transformations/resolve_strided_slice_attributes.cc:95] Check failed: start_indices_size <= num_input_axes (4 vs. 2)StridedSlice op requires no more than 2 start indices
Fatal Python error: Aborted
Current thread 0x000013c4 (most recent call first):
File "c:\drive\projects\ml-notebooks\pycharm-venv\lib\site-packages\tensorflow_core\lite\toco\python\toco_from_protos.py", line 52 in execute
File "c:\drive\projects\ml-notebooks\pycharm-venv\lib\site-packages\absl\app.py", line 250 in _run_main
File "c:\drive\projects\ml-notebooks\pycharm-venv\lib\site-packages\absl\app.py", line 299 in run
File "c:\drive\projects\ml-notebooks\pycharm-venv\lib\site-packages\tensorflow_core\python\platform\app.py", line 40 in run
File "c:\drive\projects\ml-notebooks\pycharm-venv\lib\site-packages\tensorflow_core\lite\toco\python\toco_from_protos.py", line 89 in main
File "C:\drive\projects\ml-notebooks\pycharm-venv\Scripts\toco_from_protos.exe\__main__.py", line 9 in <module>
File "C:\python\lib\runpy.py", line 85 in _run_code
File "C:\python\lib\runpy.py", line 193 in _run_module_as_main
```
## Environment info
- `transformers` version:
- Platform: Windows 10
- Python version: 3.6.8
- PyTorch version (GPU?): N/A
- Tensorflow version (GPU?): 2.0.0-dev20191002 (gpu=yes)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: N/A
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2722/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2722/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2721 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2721/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2721/comments | https://api.github.com/repos/huggingface/transformers/issues/2721/events | https://github.com/huggingface/transformers/issues/2721 | 559,158,183 | MDU6SXNzdWU1NTkxNTgxODM= | 2,721 | Is transformers ovewriting tokenizer? | {
"login": "diogocortiz",
"id": 1730916,
"node_id": "MDQ6VXNlcjE3MzA5MTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1730916?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/diogocortiz",
"html_url": "https://github.com/diogocortiz",
"followers_url": "https://api.github.com/users/diogocortiz/followers",
"following_url": "https://api.github.com/users/diogocortiz/following{/other_user}",
"gists_url": "https://api.github.com/users/diogocortiz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/diogocortiz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/diogocortiz/subscriptions",
"organizations_url": "https://api.github.com/users/diogocortiz/orgs",
"repos_url": "https://api.github.com/users/diogocortiz/repos",
"events_url": "https://api.github.com/users/diogocortiz/events{/privacy}",
"received_events_url": "https://api.github.com/users/diogocortiz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, I believe your issue was solved with https://github.com/huggingface/tokenizers/issues/120",
"Sure. I will close this case."
] | 1,580 | 1,580 | 1,580 | NONE | null | Hello. I haven't been able to use the tokenizers library since Friday.
It seems that if I install transformers via pip, it overwrites the tokenizers installation with a version that doesn't work.
If I get a new instance and run:
`pip install transformers`
Then, when I run:
`pip install tokenizers`
I get the following message:
> Requirement already satisfied: tokenizers in /usr/local/lib/python3.7/site-packages (0.0.11)
And when I try to import, I get this error:
> ImportError: cannot import name 'BertWordPieceTokenizer' from 'tokenizers'
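For reference, the exact import that fails is simply:
```python
from tokenizers import BertWordPieceTokenizer
```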
I was wondering if it is a problem related to the new Transformers you released last Friday.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2721/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2721/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2720 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2720/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2720/comments | https://api.github.com/repos/huggingface/transformers/issues/2720/events | https://github.com/huggingface/transformers/pull/2720 | 559,052,143 | MDExOlB1bGxSZXF1ZXN0MzcwMjkyNzMw | 2,720 | Add READMEs to Tensorflow versions of CamemBERT and XLM-RoBERTa | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2720?src=pr&el=h1) Report\n> Merging [#2720](https://codecov.io/gh/huggingface/transformers/pull/2720?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2ba147ecffa28e5a4f96eebd09dcd642117dedae?src=pr&el=desc) will **decrease** coverage by `0.26%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2720?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2720 +/- ##\n==========================================\n- Coverage 74.09% 73.82% -0.27% \n==========================================\n Files 93 93 \n Lines 15248 15248 \n==========================================\n- Hits 11298 11257 -41 \n- Misses 3950 3991 +41\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2720?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/2720/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `52.94% <0%> (-21.57%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/2720/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `68.79% <0%> (-3.33%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2720/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `84.87% <0%> (-0.82%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2720/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.3% <0%> (-0.52%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2720?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2720?src=pr&el=footer). Last update [2ba147e...312b0d4](https://codecov.io/gh/huggingface/transformers/pull/2720?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks @jplu! By the way (b/c I saw you uploaded a README to S3), \r\n- we might support pushing READMEs from the S3 bucket to the repo automatically.\r\n- we definitely will find a system for users to get merge rights on their model cards (via a GitHub bot maybe)",
"Yep, at first I intuitively thought that the method was the first bullet point you proposed, and then I finally saw that I had to do a PR.\r\n\r\nYour second bullet point, I think, might be feasible with the Github Actions."
] | 1,580 | 1,600 | 1,580 | CONTRIBUTOR | null | Add model cards. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2720/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2720/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2720",
"html_url": "https://github.com/huggingface/transformers/pull/2720",
"diff_url": "https://github.com/huggingface/transformers/pull/2720.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2720.patch",
"merged_at": 1580738676000
} |
https://api.github.com/repos/huggingface/transformers/issues/2719 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2719/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2719/comments | https://api.github.com/repos/huggingface/transformers/issues/2719/events | https://github.com/huggingface/transformers/issues/2719 | 558,964,825 | MDU6SXNzdWU1NTg5NjQ4MjU= | 2,719 | Error when running run_lm_finetuning.py | {
"login": "artkh24",
"id": 7457164,
"node_id": "MDQ6VXNlcjc0NTcxNjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/7457164?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/artkh24",
"html_url": "https://github.com/artkh24",
"followers_url": "https://api.github.com/users/artkh24/followers",
"following_url": "https://api.github.com/users/artkh24/following{/other_user}",
"gists_url": "https://api.github.com/users/artkh24/gists{/gist_id}",
"starred_url": "https://api.github.com/users/artkh24/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/artkh24/subscriptions",
"organizations_url": "https://api.github.com/users/artkh24/orgs",
"repos_url": "https://api.github.com/users/artkh24/repos",
"events_url": "https://api.github.com/users/artkh24/events{/privacy}",
"received_events_url": "https://api.github.com/users/artkh24/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, we would need more information to help you (all the information required in the bug template): transformers version, the full error trace, the script version.\r\n\r\nThis error is probably due to a version mismatch between your script and the transformers version you have installed.",
"I'm having a similar issue too.\r\nI updated transformers from 2.3.x to 2.4.1 today, and I'm facing a runtime error which is RuntimeError: CUDA error: device-side assert triggered.\r\nI reviewed recent updates and found out the commits [Follow up 213] is causing the error.\r\nBelow are the changes from the commits:\r\n- labels[~masked_indices] = -100 # We only compute loss on masked tokens\r\n+ labels[~masked_indices] = -1 # We only compute loss on masked tokens\r\n\r\nThe changes are related to the calculation of masked language model loss, so the problem seems to occur when args.mlm is True.\r\n\r\nAny suggestions?\r\n\r\n\r\n============================================\r\nThe full error trace\r\n============================================\r\n2020-02-03 21:36:34.839995: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll\r\n02/03/2020 21:36:38 - WARNING - __main__ - Process rank: -1, device: cuda, n_gpu: 1, distributed training: False, 16-bits training: False\r\n02/03/2020 21:36:39 - INFO - transformers.configuration_utils - loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/roberta-base-config.json from cache at C:\\Users\\*****\\.cache\\torch\\transformers\\e1a2a406b5a05063c31f4dfdee7608986ba7c6393f7f79db5e69dcd197208534.a7ab0e5de2d8321d6d6a15b199110f2c99be72976b7d151423cb8d8c261a13b6\r\n02/03/2020 21:36:39 - INFO - transformers.configuration_utils - Model config RobertaConfig {\r\n \"architectures\": [\r\n \"RobertaForMaskedLM\"\r\n ],\r\n \"attention_probs_dropout_prob\": 0.1,\r\n \"bos_token_id\": 0,\r\n \"do_sample\": false,\r\n \"eos_token_ids\": 0,\r\n \"finetuning_task\": null,\r\n \"hidden_act\": \"gelu\",\r\n \"hidden_dropout_prob\": 0.1,\r\n \"hidden_size\": 768,\r\n \"id2label\": {\r\n \"0\": \"LABEL_0\",\r\n \"1\": \"LABEL_1\"\r\n },\r\n \"initializer_range\": 0.02,\r\n \"intermediate_size\": 3072,\r\n \"is_decoder\": false,\r\n \"label2id\": {\r\n \"LABEL_0\": 0,\r\n \"LABEL_1\": 1\r\n },\r\n \"layer_norm_eps\": 1e-05,\r\n \"length_penalty\": 1.0,\r\n \"max_length\": 20,\r\n \"max_position_embeddings\": 514,\r\n \"model_type\": \"roberta\",\r\n \"num_attention_heads\": 12,\r\n \"num_beams\": 1,\r\n \"num_hidden_layers\": 12,\r\n \"num_labels\": 2,\r\n \"num_return_sequences\": 1,\r\n \"output_attentions\": false,\r\n \"output_hidden_states\": false,\r\n \"output_past\": true,\r\n \"pad_token_id\": 0,\r\n \"pruned_heads\": {},\r\n \"repetition_penalty\": 1.0,\r\n \"temperature\": 1.0,\r\n \"top_k\": 50,\r\n \"top_p\": 1.0,\r\n \"torchscript\": false,\r\n \"type_vocab_size\": 1,\r\n \"use_bfloat16\": false,\r\n \"vocab_size\": 50265\r\n}\r\n\r\n02/03/2020 21:36:39 - INFO - transformers.tokenization_utils - loading file https://s3.amazonaws.com/models.huggingface.co/bert/roberta-base-vocab.json from cache at C:\\Users\\*****\\.cache\\torch\\transformers\\d0c5776499adc1ded22493fae699da0971c1ee4c2587111707a4d177d20257a2.ef00af9e673c7160b4d41cfda1f48c5f4cba57d5142754525572a846a1ab1b9b\r\n02/03/2020 21:36:39 - INFO - transformers.tokenization_utils - loading file https://s3.amazonaws.com/models.huggingface.co/bert/roberta-base-merges.txt from cache at C:\\Users\\*****\\.cache\\torch\\transformers\\b35e7cd126cd4229a746b5d5c29a749e8e84438b14bcdb575950584fe33207e8.70bec105b4158ed9a1747fea67a43f5dee97855c64d62b6ec3742f4cfdb5feda\r\n02/03/2020 21:36:39 - INFO - transformers.modeling_utils - loading weights file 
https://s3.amazonaws.com/models.huggingface.co/bert/roberta-base-pytorch_model.bin from cache at C:\\Users\\*****\\.cache\\torch\\transformers\\228756ed15b6d200d7cb45aaef08c087e2706f54cb912863d2efe07c89584eb7.49b88ba7ec2c26a7558dda98ca3884c3b80fa31cf43a1b1f23aef3ff81ba344e\r\n02/03/2020 21:36:44 - INFO - transformers.modeling_utils - Weights of RobertaForMaskedLM not initialized from pretrained model: ['lm_head.decoder.bias']\r\n02/03/2020 21:36:46 - INFO - __main__ - Training/evaluation parameters Namespace(adam_epsilon=1e-08, block_size=510, cache_dir=None, config_name=None, device=device(type='cuda'), do_eval=True, do_train=True, eval_all_checkpoints=False, eval_data_file='../data/wikitext-2/wiki.test.raw', evaluate_during_training=True, fp16=False, fp16_opt_level='O1', gradient_accumulation_steps=1, learning_rate=5e-05, line_by_line=False, local_rank=-1, logging_steps=100, max_grad_norm=1.0, max_steps=-1, mlm=True, mlm_probability=0.15, model_name_or_path='roberta-base', model_type='roberta', n_gpu=1, no_cuda=False, num_train_epochs=1.0, output_dir='save', overwrite_cache=False, overwrite_output_dir=True, per_gpu_eval_batch_size=4, per_gpu_train_batch_size=4, save_steps=100, save_total_limit=1, seed=42, server_ip='', server_port='', should_continue=False, tokenizer_name=None, train_data_file='../data/wikitext-2/wiki.train.raw', warmup_steps=0, weight_decay=0.0)\r\n02/03/2020 21:36:46 - INFO - __main__ - Loading features from cached file ../data/wikitext-2\\roberta_cached_lm_510_wiki.train.raw\r\n02/03/2020 21:36:46 - INFO - __main__ - ***** Running training *****\r\n02/03/2020 21:36:46 - INFO - __main__ - Num examples = 4740\r\n02/03/2020 21:36:46 - INFO - __main__ - Num Epochs = 1\r\n02/03/2020 21:36:46 - INFO - __main__ - Instantaneous batch size per GPU = 4\r\n02/03/2020 21:36:46 - INFO - __main__ - Total train batch size (w. 
parallel, distributed & accumulation) = 4\r\n02/03/2020 21:36:46 - INFO - __main__ - Gradient Accumulation steps = 1\r\n02/03/2020 21:36:46 - INFO - __main__ - Total optimization steps = 1185\r\nEpoch: 0%| | 0/1 [00:00<?, ?it/s]\r\nIteration: 0%| | 0/1185 [00:00<?, ?it/s]C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [0,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\nC:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [1,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\nC:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [2,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\nC:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [3,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\nC:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [6,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\nC:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [7,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\nC:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [8,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\nC:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [9,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\nC:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [10,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\nC:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [11,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\nC:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [12,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\nC:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [13,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\nC:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [15,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\nC:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [16,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\nC:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [17,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\nC:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [18,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\nC:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [19,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\nC:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [20,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\nC:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [21,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\nC:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [22,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\nC:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [23,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\nC:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [24,0,0] Assertion `t >= 0 && t < n_classes` 
failed.\r\nC:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [25,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\nC:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [26,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\nC:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [27,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\nC:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [28,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\nC:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [30,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\nTraceback (most recent call last):\r\n File \"C:/Users/*****/PycharmProjects/*****/huggingface/run_lm_finetuning.py\", line 818, in <module>\r\n main()\r\n File \"C:/Users/*****/PycharmProjects/*****/huggingface/run_lm_finetuning.py\", line 768, in main\r\n global_step, tr_loss = train(args, train_dataset, model, tokenizer)\r\n File \"C:/Users/*****/PycharmProjects/*****/huggingface/run_lm_finetuning.py\", line 356, in train\r\n loss.backward()\r\n File \"C:\\Users\\*****\\Anaconda3\\lib\\site-packages\\torch\\tensor.py\", line 195, in backward\r\n torch.autograd.backward(self, gradient, retain_graph, create_graph)\r\n File \"C:\\Users\\*****\\Anaconda3\\lib\\site-packages\\torch\\autograd\\__init__.py\", line 99, in backward\r\n allow_unreachable=True) # allow_unreachable flag\r\nRuntimeError: CUDA error: device-side assert triggered\r\n\r\nEpoch: 0%| | 0/1 [00:00<?, ?it/s]\r\nIteration: 0%| | 0/1185 [00:00<?, ?it/s]\r\n\r\nProcess finished with exit code 1\r\n===================================================\r\n",
" Hi @LysandreJik thank you for help I updated transformers version from 2.3.0 to 2.4.1 and it started to work",
"Hi @gjgjgjik, this error shouldn't happen if you have transformers v2.4.1 and you have the updated script. Are you running the `run_lm_finetuning` script after the commit you mentioned, and against the v2.4.1 library?",
"Hi @LysandreJik, I uninstalled and re-installed transformers v2.4.1 using `pip install git+https://github.com/huggingface/transformers`, but it still happens. The `run_lm_finetuning` script that I have used is the latest one because it contains the changes from [Follow up 213]. I simply copied the whole source code from the repository. I'm still able to run GPT which is not a masked language model though.\r\n\r\nFYI,\r\nOS: Windows 10\r\nTransformers: 2.4.1\r\nPyTorch: 1.4.0\r\nTensorflow: 2.1.0",
"Alright @gjgjgjik, I'm looking into it.",
"Indeed @gjgjgjik, I got confused on this -100/-1 fix. The correct value should be -100, and I updated it in 3bf5417."
] | 1,580 | 1,580 | 1,580 | NONE | null | I'm getting the following error when trying to fine-tune BERT for the Armenian language:
RuntimeError: Assertion `cur_target >= 0 && cur_target < n_classes' failed. at C:\w\1\s\windows\pytorch\aten\src\THNN/generic/ClassNLLCriterion.c:97
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2719/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2719/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2718 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2718/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2718/comments | https://api.github.com/repos/huggingface/transformers/issues/2718/events | https://github.com/huggingface/transformers/issues/2718 | 558,964,258 | MDU6SXNzdWU1NTg5NjQyNTg= | 2,718 | DistilBertForMaskedLM is not passing ignore_index to loss fct nn.CrossEntropyLoss | {
"login": "vuamitom",
"id": 633538,
"node_id": "MDQ6VXNlcjYzMzUzOA==",
"avatar_url": "https://avatars.githubusercontent.com/u/633538?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vuamitom",
"html_url": "https://github.com/vuamitom",
"followers_url": "https://api.github.com/users/vuamitom/followers",
"following_url": "https://api.github.com/users/vuamitom/following{/other_user}",
"gists_url": "https://api.github.com/users/vuamitom/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vuamitom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vuamitom/subscriptions",
"organizations_url": "https://api.github.com/users/vuamitom/orgs",
"repos_url": "https://api.github.com/users/vuamitom/repos",
"events_url": "https://api.github.com/users/vuamitom/events{/privacy}",
"received_events_url": "https://api.github.com/users/vuamitom/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You're absolutely correct, this was a bug. I've updated it in 239dd23.",
"Thank you :). Will close this bug. "
] | 1,580 | 1,580 | 1,580 | NONE | null | # 🐛 Bug
I'm running `run_lm_finetuning.py` and got the error below:
```
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [6,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [7,0,0] Assertion `t >= 0 && t < n_classes` failed.
THCudaCheck FAIL file=/pytorch/aten/src/THCUNN/generic/ClassNLLCriterion.cu line=110 error=710 : device-side assert triggered
Traceback (most recent call last):
File "run_lm_finetuning.py", line 795, in <module>
main()
File "run_lm_finetuning.py", line 745, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer)
File "run_lm_finetuning.py", line 349, in train
outputs = model(inputs, masked_lm_labels=labels) if args.mlm else model(inputs, labels=labels)
File "/home/tamvm/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/tamvm/.local/lib/python3.6/site-packages/transformers/modeling_distilbert.py", line 550, in forward
masked_lm_labels.view(-1))
File "/home/tamvm/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/tamvm/.local/lib/python3.6/site-packages/torch/nn/modules/loss.py", line 916, in forward
ignore_index=self.ignore_index, reduction=self.reduction)
File "/home/tamvm/.local/lib/python3.6/site-packages/torch/nn/functional.py", line 2016, in cross_entropy
return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
File "/home/tamvm/.local/lib/python3.6/site-packages/torch/nn/functional.py", line 1842, in nll_loss
ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
```
Looking through the code, I realized that `-100` is used as the label for indices that are not masked. However, `DistilBertForMaskedLM` is not passing `ignore_index=-100` to `nn.CrossEntropyLoss`, which makes the loss function compute the loss on the `-100` labels as well, hence the error.
[https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_distilbert.py#L510](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_distilbert.py#L510)
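The fix should be a one-line change there; a sketch (the variable name is illustrative, the point is the `ignore_index` argument):
```python
import torch.nn as nn

# before: positions labeled -100 (non-masked tokens) contribute to the loss
# loss_fct = nn.CrossEntropyLoss()

# after: -100 labels are ignored, matching the other *ForMaskedLM heads
loss_fct = nn.CrossEntropyLoss(ignore_index=-100)
```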
## Information
Model I am using (Bert, XLNet ...): DistilBert
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* run_lm_finetuning.py
The task I am working on is:
* Fine-tuning a masked language model
## To reproduce
Steps to reproduce the behavior:
```bash
python run_lm_finetuning.py \
--output_dir=finetune_output \
--model_type=distilbert \
--model_name_or_path=distilbert-base-multilingual-cased \
--do_train \
--train_data_file=./finetune_data/train.raw.txt \
--do_eval \
--eval_data_file=./finetune_data/val.raw.txt \
--mlm \
--block_size=128
```
## Expected behavior
The model should start the training process without problems.
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.3.0
- Platform: Ubuntu 18
- Python version: 3.6.9
- PyTorch version (GPU): 1.3.1
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2718/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2718/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2717 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2717/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2717/comments | https://api.github.com/repos/huggingface/transformers/issues/2717/events | https://github.com/huggingface/transformers/issues/2717 | 558,872,938 | MDU6SXNzdWU1NTg4NzI5Mzg= | 2,717 | error while training distilbert multilingual model | {
"login": "divyag11",
"id": 39218807,
"node_id": "MDQ6VXNlcjM5MjE4ODA3",
"avatar_url": "https://avatars.githubusercontent.com/u/39218807?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/divyag11",
"html_url": "https://github.com/divyag11",
"followers_url": "https://api.github.com/users/divyag11/followers",
"following_url": "https://api.github.com/users/divyag11/following{/other_user}",
"gists_url": "https://api.github.com/users/divyag11/gists{/gist_id}",
"starred_url": "https://api.github.com/users/divyag11/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/divyag11/subscriptions",
"organizations_url": "https://api.github.com/users/divyag11/orgs",
"repos_url": "https://api.github.com/users/divyag11/repos",
"events_url": "https://api.github.com/users/divyag11/events{/privacy}",
"received_events_url": "https://api.github.com/users/divyag11/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"pls reply to above",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,580 | 1,587 | 1,587 | NONE | null | Hi,
I am trying to fine-tune the DistilBERT multilingual cased model, but I am getting an error while training,
while with the same code using DistilBERT uncased, there is no such error.
Can you please check whether there is a problem with the DistilBERT multilingual cased model?
The error is:
ValueError: Error when checking model target: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 8 array(s), for inputs ['output_1', 'output_2', 'output_3', 'output_4', 'output_5', 'output_6', 'output_7', 'output_8'] but instead got the following list of 1 arrays: [<tf.Tensor 'ExpandDims:0' shape=(None, 1) dtype=int64>]
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2717/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2717/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2716 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2716/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2716/comments | https://api.github.com/repos/huggingface/transformers/issues/2716/events | https://github.com/huggingface/transformers/pull/2716 | 558,766,640 | MDExOlB1bGxSZXF1ZXN0MzcwMDYyMzQ0 | 2,716 | Added README.md to Swedish BERT models from National Library of Sweden | {
"login": "marma",
"id": 144026,
"node_id": "MDQ6VXNlcjE0NDAyNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/144026?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marma",
"html_url": "https://github.com/marma",
"followers_url": "https://api.github.com/users/marma/followers",
"following_url": "https://api.github.com/users/marma/following{/other_user}",
"gists_url": "https://api.github.com/users/marma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marma/subscriptions",
"organizations_url": "https://api.github.com/users/marma/orgs",
"repos_url": "https://api.github.com/users/marma/repos",
"events_url": "https://api.github.com/users/marma/events{/privacy}",
"received_events_url": "https://api.github.com/users/marma/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2716?src=pr&el=h1) Report\n> Merging [#2716](https://codecov.io/gh/huggingface/transformers/pull/2716?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2ba147ecffa28e5a4f96eebd09dcd642117dedae?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2716?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2716 +/- ##\n=======================================\n Coverage 74.09% 74.09% \n=======================================\n Files 93 93 \n Lines 15248 15248 \n=======================================\n Hits 11298 11298 \n Misses 3950 3950\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2716?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2716?src=pr&el=footer). Last update [2ba147e...e46c8bf](https://codecov.io/gh/huggingface/transformers/pull/2716?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks @marma! \r\n\r\nAs also mentioned on https://github.com/huggingface/transformers/pull/2720#issuecomment-581430234, we'll find a way for users to get merge rights on their model cards (via a GitHub bot maybe)\r\n"
] | 1,580 | 1,580 | 1,580 | CONTRIBUTOR | null | Following the lead of others, these are not actual model cards but rather the README.md files from https://github.com/Kungbib/swedish-bert-models | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2716/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2716/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2716",
"html_url": "https://github.com/huggingface/transformers/pull/2716",
"diff_url": "https://github.com/huggingface/transformers/pull/2716.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2716.patch",
"merged_at": 1580738975000
} |
https://api.github.com/repos/huggingface/transformers/issues/2715 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2715/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2715/comments | https://api.github.com/repos/huggingface/transformers/issues/2715/events | https://github.com/huggingface/transformers/pull/2715 | 558,758,519 | MDExOlB1bGxSZXF1ZXN0MzcwMDU2Mzg1 | 2,715 | Optimize causal mask using torch.where | {
"login": "Akababa",
"id": 4205182,
"node_id": "MDQ6VXNlcjQyMDUxODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/4205182?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Akababa",
"html_url": "https://github.com/Akababa",
"followers_url": "https://api.github.com/users/Akababa/followers",
"following_url": "https://api.github.com/users/Akababa/following{/other_user}",
"gists_url": "https://api.github.com/users/Akababa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Akababa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Akababa/subscriptions",
"organizations_url": "https://api.github.com/users/Akababa/orgs",
"repos_url": "https://api.github.com/users/Akababa/repos",
"events_url": "https://api.github.com/users/Akababa/events{/privacy}",
"received_events_url": "https://api.github.com/users/Akababa/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
},
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
},
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2715?src=pr&el=h1) Report\n> Merging [#2715](https://codecov.io/gh/huggingface/transformers/pull/2715?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/33ef7002e17fe42b276dc6d36c07a3c39b1f09ed?src=pr&el=desc) will **decrease** coverage by `0.01%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2715?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2715 +/- ##\n==========================================\n- Coverage 77.8% 77.79% -0.02% \n==========================================\n Files 100 100 \n Lines 17051 17052 +1 \n==========================================\n- Hits 13267 13266 -1 \n- Misses 3784 3786 +2\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2715?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/2715/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `86.2% <100%> (+0.04%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2715/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.15% <0%> (-0.18%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2715/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.81% <0%> (-0.14%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2715?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2715?src=pr&el=footer). Last update [33ef700...a54a418](https://codecov.io/gh/huggingface/transformers/pull/2715?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks β what's the PyTorch compatibility on this?",
"Not sure about that, where can I find more info on compatibility? I think it only relies on torch.where (introduced <= 1.0.0) and tensors of dtype torch.bool (introduced in 1.2.0). Does the None (newaxis) slicing introduce compatibility issues?\r\n\r\nIf we want to maintain compatibility with 1.0.0, I think we can use torch.uint8 instead of torch.bool.",
"Hi, I'd recommend to make the following changes:\r\n1. Keep the original shapes of _bias_ buffer (because otherwise it breaks loading of already trained models) and make dtype equal to torch.uint8, so it'd be compatible with pytorch 1.0.0 as no torch.bool type available.\r\n`self.register_buffer(\"bias\", torch.tril(torch.ones((n_ctx, n_ctx), dtype=torch.uint8)).view(1, 1, n_ctx, n_ctx))`\r\n2. Keep -1e4 constant in a buffer to reduce allocations on each _attn call and make it works automatically with different devices (CPU and CUDA):\r\n`self.register_buffer(\"masked_bias\", torch.tensor(-1e4))`\r\n3. Keep `b = self.bias[:, :, ns - nd : ns, :ns]` line as _bias_ buffer have the original shape now\r\n4. So the _where_ statement should look like `w = torch.where(b, w, self.masked_bias)`\r\n\r\nAs a result, overall speedup will be at 10-15% here as I measured, and the code should be 100% compatible with pytorch 1.0.0",
"Hi @Akababa, \r\n\r\nThanks for the PR. I think this is a great change. I checked and it does lead to a significant speed-up :-) \r\n\r\nCould you fix the tests and I think then we can merge (see https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md)\r\n\r\n1) You should fetch the master branch and rebase your branch on top of it.\r\n2) Make sure to run `make style` in the root folder before pushing to pass the \"check_code_quality\" test.",
"Great work @Akababa - this looks good to me! \r\n\r\n@LysandreJik @thomwolf - could you check and merge? ",
"Checked slow hardcoded GPT2 tests and it looks all good!"
] | 1,580 | 1,586 | 1,586 | CONTRIBUTOR | null | Instead of multiplying by a 1.0 float mask, use `torch.where` with a bool mask for increased performance. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2715/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2715/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2715",
"html_url": "https://github.com/huggingface/transformers/pull/2715",
"diff_url": "https://github.com/huggingface/transformers/pull/2715.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2715.patch",
"merged_at": 1586290759000
} |
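For readers skimming the thread above: the change keeps the causal mask as a uint8/bool buffer and selects with `torch.where` instead of multiplying by a float mask. A self-contained sketch following the shapes and the `-1e4` fill value from the comments (the module context is illustrative):

```python
import torch

n_ctx = 1024  # maximum context length (illustrative)
# Lower-triangular causal mask kept as uint8 for older-PyTorch compatibility.
bias = torch.tril(torch.ones((n_ctx, n_ctx), dtype=torch.uint8)).view(1, 1, n_ctx, n_ctx)
masked_bias = torch.tensor(-1e4)  # fill value for masked positions

def apply_causal_mask(w: torch.Tensor) -> torch.Tensor:
    """w: attention scores of shape (batch, heads, nd, ns)."""
    nd, ns = w.size(-2), w.size(-1)
    b = bias[:, :, ns - nd : ns, :ns].bool()
    # Keep scores where the mask is True, the large negative bias elsewhere.
    return torch.where(b, w, masked_bias.to(w.dtype))

masked = apply_causal_mask(torch.randn(2, 12, 8, 8))
```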
https://api.github.com/repos/huggingface/transformers/issues/2714 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2714/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2714/comments | https://api.github.com/repos/huggingface/transformers/issues/2714/events | https://github.com/huggingface/transformers/issues/2714 | 558,751,600 | MDU6SXNzdWU1NTg3NTE2MDA= | 2,714 | How to add Dense layer on top of TFBertForSequenceClassification model? | {
"login": "sainimohit23",
"id": 26195811,
"node_id": "MDQ6VXNlcjI2MTk1ODEx",
"avatar_url": "https://avatars.githubusercontent.com/u/26195811?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sainimohit23",
"html_url": "https://github.com/sainimohit23",
"followers_url": "https://api.github.com/users/sainimohit23/followers",
"following_url": "https://api.github.com/users/sainimohit23/following{/other_user}",
"gists_url": "https://api.github.com/users/sainimohit23/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sainimohit23/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sainimohit23/subscriptions",
"organizations_url": "https://api.github.com/users/sainimohit23/orgs",
"repos_url": "https://api.github.com/users/sainimohit23/repos",
"events_url": "https://api.github.com/users/sainimohit23/events{/privacy}",
"received_events_url": "https://api.github.com/users/sainimohit23/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Found the solution on [1936](https://github.com/huggingface/transformers/issues/1936). Closing.",
"Can you write down the solution here? @sainimohit23 "
] | 1,580 | 1,597 | 1,580 | NONE | null | I am having a really hard time adding dense layers on top of this model. I have tried to add the layers of `TFBertForSequenceClassification` to a sequential model with some dense layers, like this:
```
bert_model = TFBertForSequenceClassification.from_pretrained("bert-base-cased", config=config)
model = keras.models.Sequential()
model.add(bert_model.layers[0])
model.add(keras.layers.Dense(10, activation='relu'))
model.add(keras.layers.Dense(1, activation='sigmoid'))
```
But when I fit the model using:
```
model.fit(
[padded, attention_mask],
[np.array(df[1][:2000])],
epochs=100,
)
```
I am getting this error:
```AttributeError: 'list' object has no attribute 'shape'```
I have also tried to use the layers of `TFBertForSequenceClassification` with the `keras.models.Model` class, but again there is no way to get the input layer. For example, using `bert_model.layers[0].input_shape` gives the following error:
```
1571 """
1572 if not self._inbound_nodes:
-> 1573 raise AttributeError('The layer has never been called '
1574 'and thus has no defined input shape.')
1575 all_input_shapes = set(
AttributeError: The layer has never been called and thus has no defined input shape.
```
What is the right way to add layers on top of this model? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2714/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2714/timeline | completed | null | null |
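Since the thread above only links to #1936 without restating the solution, here is one common pattern for the question in issue 2714 (a sketch; the sequence length, layer sizes, and checkpoint are illustrative): wrap the base `TFBertModel` in a functional Keras model with explicit `Input` layers, then stack `Dense` layers on the `[CLS]` output.

```python
import tensorflow as tf
from transformers import TFBertModel

max_len = 128  # illustrative sequence length
input_ids = tf.keras.layers.Input(shape=(max_len,), dtype=tf.int32, name="input_ids")
attention_mask = tf.keras.layers.Input(shape=(max_len,), dtype=tf.int32, name="attention_mask")

bert = TFBertModel.from_pretrained("bert-base-cased")
sequence_output = bert(input_ids, attention_mask=attention_mask)[0]  # (batch, seq, hidden)
cls_embedding = sequence_output[:, 0, :]  # embedding of the [CLS] token

x = tf.keras.layers.Dense(10, activation="relu")(cls_embedding)
output = tf.keras.layers.Dense(1, activation="sigmoid")(x)

model = tf.keras.Model(inputs=[input_ids, attention_mask], outputs=output)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit([padded, attention_mask], labels, ...) now expects a single target array.
```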
https://api.github.com/repos/huggingface/transformers/issues/2713 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2713/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2713/comments | https://api.github.com/repos/huggingface/transformers/issues/2713/events | https://github.com/huggingface/transformers/issues/2713 | 558,713,444 | MDU6SXNzdWU1NTg3MTM0NDQ= | 2,713 | Weights of FlaubertForQuestionAnswering not initialized from pretrained model | {
"login": "gqoew",
"id": 32342701,
"node_id": "MDQ6VXNlcjMyMzQyNzAx",
"avatar_url": "https://avatars.githubusercontent.com/u/32342701?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gqoew",
"html_url": "https://github.com/gqoew",
"followers_url": "https://api.github.com/users/gqoew/followers",
"following_url": "https://api.github.com/users/gqoew/following{/other_user}",
"gists_url": "https://api.github.com/users/gqoew/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gqoew/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gqoew/subscriptions",
"organizations_url": "https://api.github.com/users/gqoew/orgs",
"repos_url": "https://api.github.com/users/gqoew/repos",
"events_url": "https://api.github.com/users/gqoew/events{/privacy}",
"received_events_url": "https://api.github.com/users/gqoew/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The Flaubert checkpoints contain the base transformer model, not the weights for question answering (similar to most checkpoints). The point of the `run_squad` script is to fine-tune the weights of the additional question answering head to the specific task (french squad in your case).",
"Hi @LysandreJik \r\n\r\n1. Actually I had the intiuition something was wrong because I had this \"missing weights\" message again after QA training during evaluation step, and all evaluation metrics were equal to 0... like if the learned weights during QA training were not loaded at evaluation step? How to make sure eval step loads the learned weights?\r\n\r\n2. I just retried running the command above (training + eval) from a fresh env and now I have a new issue:\r\n\r\n```python-traceback\r\nconvert squad examples to features: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ$\r\n| 84943/84943 [52:24<00:00, 27.01it/s]\r\nadd example index and unique id: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 8$\r\n943/84943 [00:00<00:00, 587500.52it/s]\r\n02/09/2020 14:42:05 - INFO - __main__ - Saving features into cached file ./cached_train_flaubert-base-uncased_384\r\n02/09/2020 14:44:49 - INFO - __main__ - ***** Running training *****\r\n02/09/2020 14:44:49 - INFO - __main__ - Num examples = 87016\r\n02/09/2020 14:44:49 - INFO - __main__ - Num Epochs = 2\r\n02/09/2020 14:44:49 - INFO - __main__ - Instantaneous batch size per GPU = 3\r\n02/09/2020 14:44:49 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 3\r\n02/09/2020 14:44:49 - INFO - __main__ - Gradient Accumulation steps = 1\r\n02/09/2020 14:44:49 - INFO - __main__ - Total optimization steps = 58012\r\nEpoch: 0%| \r\n | 0/2 [00:00<?, ?it/sTraceback (most recent call last): \r\n | 0/29006 [00:00<?, ?it/s]\r\n File \"./examples/run_squad.py\", line 857, in <module>\r\n main()\r\n File \"./examples/run_squad.py\", line 796, in main\r\n global_step, tr_loss = train(args, train_dataset, model, tokenizer)\r\n File \"./examples/run_squad.py\", line 231, in train\r\n outputs = model(**inputs)\r\n File \"/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py\", line 532, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/usr/local/lib/python3.5/dist-packages/transformers/modeling_xlm.py\", line 1036, in forward\r\n inputs_embeds=inputs_embeds,\r\n File \"/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py\", line 532, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/usr/local/lib/python3.5/dist-packages/transformers/modeling_flaubert.py\", line 235, in forward\r\n tensor = tensor + self.lang_embeddings(langs)\r\n File \"/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py\", line 576, in __getattr__\r\n type(self).__name__, name))\r\nAttributeError: 'FlaubertModel' object has no attribute 'lang_embeddings'\r\nEpoch: 0%| \r\n | 0/2 [00:00<?, ?it/s]\r\nIteration: 0%| \r\n | 0/29006 [00:00<?, ?it/s]\r\n```\r\n\r\nIt seems `lang_embeddings` is not available in `FlaubertModel`: \r\n\r\nhttps://github.com/huggingface/transformers/blob/d426b58b9e32a2ffc8c8a1196143270e22054a46/src/transformers/modeling_flaubert.py#L229-L240\r\n\r\nIt is declared in XLM:\r\n\r\nhttps://github.com/huggingface/transformers/blob/d426b58b9e32a2ffc8c8a1196143270e22054a46/src/transformers/modeling_xlm.py#L358-L365\r\n\r\nDo you have any idea how to fix this? Thanks!",
"Hi, \r\n\r\n1) Indeed, you should not have gotten these warnings if the model loaded was the one that you just trained.\r\n\r\n2) This should have been fixed with https://github.com/huggingface/transformers/commit/cfb7d108bd4ad067a03faf15255a6ea55a6c8d39, could you install from source and let me know if it fixes your issue?",
"@LysandreJik It's fixed now! Thank you π ",
"Great to hear! "
] | 1,580 | 1,581 | 1,581 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...):
`Flaubert`
Language I am using the model on (English, Chinese ...):
`French`
The task I am working on is: **fine-tuning Flaubert on French-translated SQuAD**
The problem arises when using:
```
python3 ./examples/run_squad.py \
--model_type flaubert \
--model_name_or_path flaubert-base-uncased \
--do_train \
--do_eval \
--do_lower_case \
--train_file SQuAD-v1.1-train_fr_ss999_awstart2_net.json \
--predict_file SQuAD-v1.1-dev_fr_ss999_awstart2_net.json \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir output \
--per_gpu_eval_batch_size=3 \
--per_gpu_train_batch_size=3
```
For some reason, some of the weights are not initialized from the downloaded pre-trained model `flaubert-base-uncased` for training:
```python-traceback
2/02/2020 15:10:53 - WARNING - __main__ - Process rank: -1, device: cuda, n_gpu: 1, distributed training: False, 16-bits training: False
02/02/2020 15:10:53 - INFO - transformers.configuration_utils - loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/flaubert/flaubert_base_uncased/config.json from cache at /root/.cache/torch/transformers/d1cf66823bb82e0ef671e7bae75bf86161cbf8ca218f893bc0129599e6e40c2a.e40562626242ae71bf0ce9aa0832297b724c4859407a09771341048981bb3736
02/02/2020 15:10:53 - INFO - transformers.configuration_utils - Model config FlaubertConfig {
"amp": 1,
"architectures": [
"FlaubertWithLMHeadModel"
],
"asm": false,
"attention_dropout": 0.1,
"bos_index": 0,
"bos_token_id": 0,
"bptt": 512,
"causal": false,
"clip_grad_norm": 5,
"do_sample": false,
"dropout": 0.1,
"emb_dim": 768,
"embed_init_std": 0.02209708691207961,
"encoder_only": true,
"end_n_top": 5,
"eos_index": 1,
"eos_token_ids": 0,
"finetuning_task": null,
"fp16": true,
"gelu_activation": true,
"group_by_size": true,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"id2lang": {
"0": "fr"
},
"init_std": 0.02,
"is_decoder": false,
"is_encoder": true,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"lang2id": {
"fr": 0
},
"lang_id": 0,
"langs": [
"fr"
],
"layer_norm_eps": 1e-12,
"layerdrop": 0.0,
"length_penalty": 1.0,
"lg_sampling_factor": -1,
"lgs": "fr",
"mask_index": 5,
"mask_token_id": 0,
"max_batch_size": 0,
"max_length": 20,
"max_position_embeddings": 512,
"max_vocab": -1,
"mlm_steps": [
[
"fr",
null
]
],
"model_type": "flaubert",
"n_heads": 12,
"n_langs": 1,
"n_layers": 12,
"num_beams": 1,
"num_labels": 2,
"num_return_sequences": 1,
"output_attentions": false,
"output_hidden_states": false,
"output_past": true,
"pad_index": 2,
"pad_token_id": 0,
"pre_norm": false,
"pruned_heads": {},
"repetition_penalty": 1.0,
"sample_alpha": 0,
"share_inout_emb": true,
"sinusoidal_embeddings": false,
"start_n_top": 5,
"summary_activation": null,
"summary_first_dropout": 0.1,
"summary_proj_to_labels": true,
"summary_type": "first",
"summary_use_proj": true,
"temperature": 1.0,
"tokens_per_batch": -1,
"top_k": 50,
"top_p": 1.0,
"torchscript": false,
"unk_index": 3,
"use_bfloat16": false,
"use_lang_emb": true,
"vocab_size": 67542,
"word_blank": 0,
"word_dropout": 0,
"word_keep": 0.1,
"word_mask": 0.8,
"word_mask_keep_rand": "0.8,0.1,0.1",
"word_pred": 0.15,
"word_rand": 0.1,
"word_shuffle": 0
}
02/02/2020 15:10:53 - INFO - transformers.tokenization_utils - loading file https://s3.amazonaws.com/models.huggingface.co/bert/flaubert/flaubert_base_uncased/vocab.json from cache at /root/.cache/torch/transformers/8f54ff51875f0422a9c265ab77515058f2655b901caa5f8ff19954c8a126a2fe.4dbbb80764d7ce5ea8639cef2ffdf2c6be3c491192c042bba9651d56b917d49c
02/02/2020 15:10:53 - INFO - transformers.tokenization_utils - loading file https://s3.amazonaws.com/models.huggingface.co/bert/flaubert/flaubert_base_uncased/merges.txt from cache at /root/.cache/torch/transformers/42f0fe2cd5eebb0c450bd936b0104b27c21e33138b445e9c7124094e05df02f6.5e19e4f2e2e9e11ecde5cc44c2c65f0dc11671ff5dfcd0066699e64bbc7c5a8d
02/02/2020 15:10:53 - INFO - transformers.modeling_utils - loading weights file https://s3.amazonaws.com/models.huggingface.co/bert/flaubert/flaubert_base_uncased/pytorch_model.bin from cache at /root/.cache/torch/transformers/2931084022a5d35320c07628cb7de631bdefe38f0e87d5d48a9e04be799ce0ef.8a02ed26eb9bc391a8fd64b6acce3b2167eb7a01cd4365502dca3a5980918425
02/02/2020 15:11:00 - INFO - transformers.modeling_utils - Weights of FlaubertForQuestionAnswering not initialized from pretrained model: ['qa_outputs.start_logits.dense.bias', 'qa_outputs.start_logits.dense.weight', 'qa_outputs.end_logits.dense_0.bias', 'qa_outputs.end_logits.dense_0.weight', 'qa_outputs.end_logits.LayerNorm.bias', 'qa_outputs.end_logits.LayerNorm.weight', 'qa_outputs.end_logits.dense_1.bias', 'qa_outputs.end_logits.dense_1.weight', 'qa_outputs.answer_class.dense_0.bias', 'qa_outputs.answer_class.dense_0.weight', 'qa_outputs.answer_class.dense_1.weight']
02/02/2020 15:11:00 - INFO - transformers.modeling_utils - Weights from pretrained model not used in FlaubertForQuestionAnswering: ['pred_layer.proj.bias', 'pred_layer.proj.weight']
02/02/2020 15:11:06 - INFO - __main__ - Training/evaluation parameters Namespace(adam_epsilon=1e-08, cache_dir='', config_name='', data_dir=None, device=device(type='cuda'), do_eval=True, do_lower_case=True, do_train=True, doc_stride=128, eval_all_checkpoints=False, evaluate_during_training=False, fp16=False, fp16_opt_level='O1', gradient_accumulation_steps=1, learning_rate=3e-05, local_rank=-1, logging_steps=500, max_answer_length=30, max_grad_norm=1.0, max_query_length=64, max_seq_length=384, max_steps=-1, model_name_or_path='flaubert-base-uncased', model_type='flaubert', n_best_size=20, n_gpu=1, no_cuda=False, null_score_diff_threshold=0.0, num_train_epochs=2.0, output_dir='output', overwrite_cache=False, overwrite_output_dir=False, per_gpu_eval_batch_size=3, per_gpu_train_batch_size=3, predict_file='SQuAD-v1.1-dev_fr_ss999_awstart2_net.json', save_steps=500, seed=42, server_ip='', server_port='', threads=1, tokenizer_name='', train_file='SQuAD-v1.1-train_fr_ss999_awstart2_net.json', verbose_logging=False, version_2_with_negative=False, warmup_steps=0, weight_decay=0.0)
02/02/2020 15:11:06 - INFO - __main__ - Creating features from dataset file at .
100%|██████████████████████████████████████████████████████████████████████████████████████| 442/442 [00:42<00:00, 10.49it/s]
convert squad examples to features:   4%|██        | 3457/84943 [02:18<37:30, 36.21it/s]
```
## To reproduce
Steps to reproduce the behavior:
Edit `run_squad.py` to support `flaubert`:
```python
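# Assumes FlaubertConfig, FlaubertForQuestionAnswering and FlaubertTokenizer
# are also added to the transformers imports at the top of run_squad.py.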
MODEL_CLASSES = {
"bert": (BertConfig, BertForQuestionAnswering, BertTokenizer),
"roberta": (RobertaConfig, RobertaForQuestionAnswering, RobertaTokenizer),
"xlnet": (XLNetConfig, XLNetForQuestionAnswering, XLNetTokenizer),
"xlm": (XLMConfig, XLMForQuestionAnswering, XLMTokenizer),
"distilbert": (DistilBertConfig, DistilBertForQuestionAnswering, DistilBertTokenizer),
"albert": (AlbertConfig, AlbertForQuestionAnswering, AlbertTokenizer),
"flaubert": (FlaubertConfig, FlaubertForQuestionAnswering, FlaubertTokenizer),
}
```
I had to make a few other small edits. Then I ran the script:
```bash
python3 ./examples/run_squad.py \
--model_type flaubert \
--model_name_or_path flaubert-base-uncased \
--do_train \
--do_eval \
--do_lower_case \
--train_file SQuAD-v1.1-train_fr_ss999_awstart2_net.json \
--predict_file SQuAD-v1.1-dev_fr_ss999_awstart2_net.json \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir output \
--per_gpu_eval_batch_size=3 \
--per_gpu_train_batch_size=3
```
Dataset available here: https://github.com/Alikabbadj/French-SQuAD
## Expected behavior
Load weights from the pre-trained `flaubert-base-uncased` model, fine-tune them on the FR SQuAD train set, then use the newly trained weights to evaluate the model on the FR SQuAD dev set.
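As the first reply in this thread notes, the `qa_outputs.*` warning is expected: the checkpoint ships only the base transformer, and the question-answering head starts from random weights that `run_squad.py` then fine-tunes. A quick sanity check (a sketch):

```python
from transformers import FlaubertForQuestionAnswering

model = FlaubertForQuestionAnswering.from_pretrained("flaubert-base-uncased")
# These are exactly the weights listed in the warning above; they are the
# parameters the SQuAD fine-tuning will learn.
head_params = sum(p.numel() for p in model.qa_outputs.parameters())
print(f"{head_params} freshly initialized head parameters")
```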
## Environment info
- `transformers` version: `transformers==2.4.1`
- Platform: `Deep Learning AMI (Ubuntu 16.04) Version 26.0 `
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2713/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2713/timeline | completed | null | null |