url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | state_reason | draft | pull_request |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/7721 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7721/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7721/comments | https://api.github.com/repos/huggingface/transformers/issues/7721/events | https://github.com/huggingface/transformers/issues/7721 | 719,128,083 | MDU6SXNzdWU3MTkxMjgwODM= | 7,721 | Can you help me out with how to input custom data files in the RAG retriever, and the data format? | {
"login": "Madhuri05thorat",
"id": 56575589,
"node_id": "MDQ6VXNlcjU2NTc1NTg5",
"avatar_url": "https://avatars.githubusercontent.com/u/56575589?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Madhuri05thorat",
"html_url": "https://github.com/Madhuri05thorat",
"followers_url": "https://api.github.com/users/Madhuri05thorat/followers",
"following_url": "https://api.github.com/users/Madhuri05thorat/following{/other_user}",
"gists_url": "https://api.github.com/users/Madhuri05thorat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Madhuri05thorat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Madhuri05thorat/subscriptions",
"organizations_url": "https://api.github.com/users/Madhuri05thorat/orgs",
"repos_url": "https://api.github.com/users/Madhuri05thorat/repos",
"events_url": "https://api.github.com/users/Madhuri05thorat/events{/privacy}",
"received_events_url": "https://api.github.com/users/Madhuri05thorat/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,602 | 1,608 | 1,608 | NONE | null | sign me up, Sam
_Originally posted by @stas00 in https://github.com/huggingface/transformers/issues/7715#issuecomment-706769068_ | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7721/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7721/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7720 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7720/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7720/comments | https://api.github.com/repos/huggingface/transformers/issues/7720/events | https://github.com/huggingface/transformers/pull/7720 | 719,009,517 | MDExOlB1bGxSZXF1ZXN0NTAxMjcwMzE1 | 7,720 | Fix trainer callback | {
"login": "cccntu",
"id": 31893406,
"node_id": "MDQ6VXNlcjMxODkzNDA2",
"avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cccntu",
"html_url": "https://github.com/cccntu",
"followers_url": "https://api.github.com/users/cccntu/followers",
"following_url": "https://api.github.com/users/cccntu/following{/other_user}",
"gists_url": "https://api.github.com/users/cccntu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cccntu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cccntu/subscriptions",
"organizations_url": "https://api.github.com/users/cccntu/orgs",
"repos_url": "https://api.github.com/users/cccntu/repos",
"events_url": "https://api.github.com/users/cccntu/events{/privacy}",
"received_events_url": "https://api.github.com/users/cccntu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,602 | 1,602 | 1,602 | CONTRIBUTOR | null | Fix a bug that happens when subclassing Trainer and
overwriting evaluate() without calling prediction_loop().
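The failure mode, roughly: a subclass replaces `evaluate()` with custom logic, so state that `prediction_loop()` would normally set up is never initialized. A minimal sketch of the pattern (class name and return value are illustrative):
```
from transformers import Trainer

class MyTrainer(Trainer):
    def evaluate(self, eval_dataset=None):
        # custom evaluation that never goes through self.prediction_loop(...)
        return {"eval_loss": 0.0}
```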
# What does this PR do?
Fixes #7702
@sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7720/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7720/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7720",
"html_url": "https://github.com/huggingface/transformers/pull/7720",
"diff_url": "https://github.com/huggingface/transformers/pull/7720.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7720.patch",
"merged_at": 1602503113000
} |
https://api.github.com/repos/huggingface/transformers/issues/7719 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7719/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7719/comments | https://api.github.com/repos/huggingface/transformers/issues/7719/events | https://github.com/huggingface/transformers/issues/7719 | 718,966,706 | MDU6SXNzdWU3MTg5NjY3MDY= | 7,719 | wrong decoder_input_ids[:,0] for MarianMT models ? | {
"login": "sweta20",
"id": 11375341,
"node_id": "MDQ6VXNlcjExMzc1MzQx",
"avatar_url": "https://avatars.githubusercontent.com/u/11375341?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sweta20",
"html_url": "https://github.com/sweta20",
"followers_url": "https://api.github.com/users/sweta20/followers",
"following_url": "https://api.github.com/users/sweta20/following{/other_user}",
"gists_url": "https://api.github.com/users/sweta20/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sweta20/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sweta20/subscriptions",
"organizations_url": "https://api.github.com/users/sweta20/orgs",
"repos_url": "https://api.github.com/users/sweta20/repos",
"events_url": "https://api.github.com/users/sweta20/events{/privacy}",
"received_events_url": "https://api.github.com/users/sweta20/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2039044877,
"node_id": "MDU6TGFiZWwyMDM5MDQ0ODc3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/marian",
"name": "marian",
"color": "30cc95",
"default": false,
"description": ""
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"This is the intended behavior.\r\nIt is very counterintuitive to me as well, but changing `prepare_seq2seq_batch` or `shift_tokens_right` to end up with `decoder_start_token_id` at the 0th position of `decoder_input_ids` seems to lead to worse fine-tuning performance.\r\n\r\nIf you have evidence to the contrary, I would be happy to change it.",
"I'm running the experiment now on wmt-en-ro, we'll see how it goes!",
"Thanks! I was using the pre-trained model for scoring (src, tgt) pairs and didn't actually get a chance to check the impact on finetuning yet. ",
"I ran this last night, and finetuning loss was identical with and without the change.\r\nBleu was within 0.1 (`master` was slightly higher).\r\nHere is the branch if you want to play with it: https://github.com/sshleifer/transformers_fork/tree/hack-batches-v2"
] | 1,602 | 1,603 | 1,603 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.3.1
- Platform: Linux
- Python version: 3.7.0
- PyTorch version (GPU?): 1.6.0
- Using GPU in script?: Yes
### Who can help: @sshleifer
## Information
Model I am using (Bert, XLNet ...): MarianMTModel
The problem arises when using:
* [x] the official example scripts: (give details below)
```
from transformers import BatchEncoding, MarianTokenizer

tok = MarianTokenizer.from_pretrained('Helsinki-NLP/opus-mt-en-de')
src_texts = ["I am a small frog.", "Tom asked his teacher for advice."]
tgt_texts = ["Ich bin ein kleiner Frosch.", "Tom bat seinen Lehrer um Rat."]  # optional
batch_enc: BatchEncoding = tok.prepare_seq2seq_batch(src_texts, tgt_texts=tgt_texts)
# model(**batch_enc) should work
```
model(**batch) doesn't work as intended because [shift_tokens_right](https://github.com/huggingface/transformers/blob/03ec02a667d5ed3075ea65b9f89ef7135e97f6b4/src/transformers/modeling_bart.py#L226) adds eos token to generate the target sequence.
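For reference, the linked implementation looks roughly like this (paraphrased from `modeling_bart.py` at that commit; check the source for the exact version):
```
import torch

def shift_tokens_right(input_ids: torch.Tensor, pad_token_id: int) -> torch.Tensor:
    """Shift input ids one token to the right, wrapping the last non-pad token (usually <eos>) to position 0."""
    prev_output_tokens = input_ids.clone()
    index_of_eos = (input_ids.ne(pad_token_id).sum(dim=1) - 1).unsqueeze(-1)
    prev_output_tokens[:, 0] = input_ids.gather(1, index_of_eos).squeeze()
    prev_output_tokens[:, 1:] = input_ids[:, :-1]
    return prev_output_tokens
```
Note that the token wrapped to position 0 is the final `<eos>` from `labels`, not `decoder_start_token_id` — which is why position 0 comes out as 0 rather than 58100 below.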
```
shift_tokens_right(batch["labels"], model.config.pad_token_id)
```
returns
```
[[0, 105, 495, 53, 5324, 17279, 649, 3],
[0, 2136, 8818, 715, 5832, 91, 688, 3]]
```
instead of
```
[[58100, 105, 495, 53, 5324, 17279, 649, 3],
[58100, 2136, 8818, 715, 5832, 91, 688, 3]]
```
Here, "58100" is the decoder_start_token_id. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7719/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7719/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7718 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7718/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7718/comments | https://api.github.com/repos/huggingface/transformers/issues/7718/events | https://github.com/huggingface/transformers/pull/7718 | 718,958,367 | MDExOlB1bGxSZXF1ZXN0NTAxMjMwMzg0 | 7,718 | fixed typo in warning line 207. | {
"login": "Berowne",
"id": 23649143,
"node_id": "MDQ6VXNlcjIzNjQ5MTQz",
"avatar_url": "https://avatars.githubusercontent.com/u/23649143?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Berowne",
"html_url": "https://github.com/Berowne",
"followers_url": "https://api.github.com/users/Berowne/followers",
"following_url": "https://api.github.com/users/Berowne/following{/other_user}",
"gists_url": "https://api.github.com/users/Berowne/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Berowne/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Berowne/subscriptions",
"organizations_url": "https://api.github.com/users/Berowne/orgs",
"repos_url": "https://api.github.com/users/Berowne/repos",
"events_url": "https://api.github.com/users/Berowne/events{/privacy}",
"received_events_url": "https://api.github.com/users/Berowne/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,602 | 1,602 | 1,602 | CONTRIBUTOR | null | replace 'men_len' with 'mem_len' to match parameter name
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
--> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7718/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7718/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7718",
"html_url": "https://github.com/huggingface/transformers/pull/7718",
"diff_url": "https://github.com/huggingface/transformers/pull/7718.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7718.patch",
"merged_at": 1602489539000
} |
https://api.github.com/repos/huggingface/transformers/issues/7717 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7717/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7717/comments | https://api.github.com/repos/huggingface/transformers/issues/7717/events | https://github.com/huggingface/transformers/pull/7717 | 718,947,711 | MDExOlB1bGxSZXF1ZXN0NTAxMjIyMTE4 | 7,717 | The input training data files (multiple files in glob format). | {
"login": "kfkelvinng",
"id": 1923847,
"node_id": "MDQ6VXNlcjE5MjM4NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1923847?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kfkelvinng",
"html_url": "https://github.com/kfkelvinng",
"followers_url": "https://api.github.com/users/kfkelvinng/followers",
"following_url": "https://api.github.com/users/kfkelvinng/following{/other_user}",
"gists_url": "https://api.github.com/users/kfkelvinng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kfkelvinng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kfkelvinng/subscriptions",
"organizations_url": "https://api.github.com/users/kfkelvinng/orgs",
"repos_url": "https://api.github.com/users/kfkelvinng/repos",
"events_url": "https://api.github.com/users/kfkelvinng/events{/privacy}",
"received_events_url": "https://api.github.com/users/kfkelvinng/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,602 | 1,602 | 1,602 | CONTRIBUTOR | null | Corpora very often come in [split files (book-large-p1.txt, book-large-p2.txt)](https://huggingface.co/datasets/bookcorpus). Splitting large files into smaller ones can also prevent [language_modeling's tokenizer from going out of memory](https://github.com/huggingface/transformers/blob/6303b5a7185fba43830db0cbb06c61861f57ddff/src/transformers/data/datasets/language_modeling.py#L67) in environments like Colab, which have no swap memory and are limited to Standard (12 GB) or High-RAM (25 GB) instances.
Rather than making assumptions and prematurely truncating the file to avoid such errors, we add support for concatenating training data at the Dataset level. Users can split files into multiple 512 MB chunks, in which case language_modeling's tokenizer is less likely to go out of memory.
In addition, the memory limitation could be eased even further by keeping a [PyTorch tensor in memory](https://github.com/huggingface/transformers/blob/6303b5a7185fba43830db0cbb06c61861f57ddff/src/transformers/data/datasets/language_modeling.py#L88) instead of a Python list. We leave this to future work.
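A minimal sketch of the idea (the file pattern, tokenizer checkpoint, and `block_size` here are illustrative):
```
import glob

from torch.utils.data import ConcatDataset
from transformers import AutoTokenizer, LineByLineTextDataset

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
files = sorted(glob.glob("book-large-p*.txt"))  # the glob-format training files
# build one dataset per file, then concatenate them at the Dataset level
train_dataset = ConcatDataset(
    [LineByLineTextDataset(tokenizer=tokenizer, file_path=f, block_size=512) for f in files]
)
```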
@LysandreJik @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7717/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7717/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7717",
"html_url": "https://github.com/huggingface/transformers/pull/7717",
"diff_url": "https://github.com/huggingface/transformers/pull/7717.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7717.patch",
"merged_at": 1602503042000
} |
https://api.github.com/repos/huggingface/transformers/issues/7716 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7716/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7716/comments | https://api.github.com/repos/huggingface/transformers/issues/7716/events | https://github.com/huggingface/transformers/issues/7716 | 718,931,529 | MDU6SXNzdWU3MTg5MzE1Mjk= | 7,716 | Hosted Inference API for Token Classification doesn't Highlight Tokens correctly | {
"login": "agemagician",
"id": 6087313,
"node_id": "MDQ6VXNlcjYwODczMTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6087313?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/agemagician",
"html_url": "https://github.com/agemagician",
"followers_url": "https://api.github.com/users/agemagician/followers",
"following_url": "https://api.github.com/users/agemagician/following{/other_user}",
"gists_url": "https://api.github.com/users/agemagician/gists{/gist_id}",
"starred_url": "https://api.github.com/users/agemagician/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/agemagician/subscriptions",
"organizations_url": "https://api.github.com/users/agemagician/orgs",
"repos_url": "https://api.github.com/users/agemagician/repos",
"events_url": "https://api.github.com/users/agemagician/events{/privacy}",
"received_events_url": "https://api.github.com/users/agemagician/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
},
{
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
}
] | [
"@julien-c @mfuntowicz any insights or updates for this issue ?",
"I see the issue, but not sure how to best fix it to be honest – as it seems a very specific problem (token classification models that classify the special tokens as non-`O`)\r\n\r\nWhen we run fast tokenizers by default we'll get token alignment offsets into the original inputs, so that might solve this issue elegantly.\r\n\r\nMay I ask what's your use case here and do you need this use case supported by the inference widget (and at which horizon)?",
"Our main project called [ProtTrans](https://github.com/agemagician/ProtTrans), which trains various language modelling models for protein sequences at large scale.\r\n\r\nThis specific use case predicts the secondary structure for protein sequences. It is one step behind predicting the 3D structure of protein sequences (like Google AlphaFold) that allows companies to find a drug or a cure for a virus like Covid-19.\r\n\r\nFor us we want to use the inference widget to show a live example for the prediction power of our fine-tuned models on different tasks. Later, companies or researchers might need to use it at large scale to make this prediction using your APIs.\r\n\r\nHopefully, this anwer your question 😄 \r\n\r\nReferences:\r\nhttps://blogs.nvidia.com/blog/2020/07/16/ai-reads-proteins-covid/\r\nhttps://www.youtube.com/watch?v=04E3EjsQLYo&t=89s",
"👍 Oh yes I know (and love) your project and general goal/use case. I was referring to the specific use of the inference widget.\r\n\r\nI'll see what we can do. Out of curiosity, any specific reason you trained with special tokens (vs. just the raw sequence)? To be able to also do document-level classification from the same pretrained model?",
"The original pretrained model [ProtBert-BFD](prot_bert_bfd) was trained using [Google Bert script](https://github.com/google-research/bert) on TPU, which automatically add these special tokens.\r\n\r\nThis allows us to perform also document-level classification as you mentioned. Like [ProtBert-BFD-MS](https://huggingface.co/Rostlab/prot_bert_bfd_membrane?text=M+G+L+P+V+S+W+A+P+P+A+L+W+V+L+G+C+C+A+L+L+L+S+L+W+A+L+C+T+A+C+R+R+P+E+D+A+V+A+P+R+K+R+A+R+R+Q+R+A+R+L+Q+G+S+A+T+A+A+E+A+S+L+L+R+R+T+H+L+C+S+L+S+K+S+D+T+R+L+H+E+L+H+R+G+P+R+S+S+R+A+L+R+P+A+S+M+D+L+L+R+P+H+W+L+E+V+S+R+D+I+T+G+P+Q+A+A+P+S+A+F+P+H+Q+E+L+P+R+A+L+P+A+A+A+A+T+A+G+C+A+G+L+E+A+T+Y+S+N+V+G+L+A+A+L+P+G+V+S+L+A+A+S+P+V+V+A+E+Y+A+R+V+Q+K+R+K+G+T+H+R+S+P+Q+E+P+Q+Q+G+K+T+E+V+T+P+A+A+Q+V+D+V+L+Y+S+R+V+C+K+P+K+R+R+D+P+G+P+T+T+D+P+L+D+P+K+G+Q+G+A+I+L+A+L+A+G+D+L+A+Y+Q+T+L+P+L+R+A+L+D+V+D+S+G+P+L+E+N+V+Y+E+S+I+R+E+L+G+D+P+A+G+R+S+S+T+C+G+A+G+T+P+P+A+S+S+C+P+S+L+G+R+G+W+R+P+L+P+A+S+L+P) fine-tuned model.\r\n\r\nWe found out that using also the special tokens during fine-tuning [ProtBert-BFD-SS3](https://huggingface.co/Rostlab/prot_bert_bfd_ss3) model perform better than not using it. I would assume because: 1)The positional encoding. 2) It matches the original Bert training method. 3) you recommended to use it in your token classification example :)\r\n\r\nThanks in advance for looking into this issue.",
"not to keep pushing my own PR https://github.com/huggingface/transformers/pull/5970 but this solves some existing problems related to NER pipelines. The current hold-up is whether or not this provides a general enough solution for various models/langs [*](https://github.com/huggingface/transformers/pull/5970#discussion_r504519659).\r\nIf fast tokenizers are supported by all I can switch to a better implementation on the pipeline too, but at the current state I don't have an alternative. (suggestions are welcome)",
"@julien-c Any progress for this issue ?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,602 | 1,610 | 1,610 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 3.3.1
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0+cu101 (False)
- Tensorflow version (GPU?): 2.3.0 (False)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
albert, bert, GPT2, XLM: @LysandreJik
Model Cards: @julien-c
examples/token-classification: @stefan-it
## Information
Model I am using (Bert, XLNet ...):
Bert
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
https://huggingface.co/Rostlab/prot_bert_bfd_ss3?text=N+L+Y+I+Q+W+L+K+D+G+G+P+S+S+G+R+P+P+P+S
https://huggingface.co/Rostlab/prot_bert_bfd_ss3?text=T+G+N+L+Y+I+Q+W+L+K+D+G+G+P+S+S+G+R+P+P+P+S+A+T+G
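The model's raw predictions can also be inspected locally with a token-classification pipeline (a reproduction sketch; whether the widget shares this exact code path is an assumption on my part):
```
from transformers import pipeline

nlp = pipeline("ner", model="Rostlab/prot_bert_bfd_ss3", grouped_entities=True)
print(nlp("N L Y I Q W L K D G G P S S G R P P P S"))
```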
## Expected behavior
When the Hosted Inference API finds that a token shares its tag with an adjacent special token like [CLS] or [SEP], it doesn't highlight or tag that token properly.
Example:
<img width="617" alt="Screenshot 2020-10-11 at 23 31 35" src="https://user-images.githubusercontent.com/6087313/95690715-f17aba80-0c19-11eb-803d-c439cd9b137d.png">
Because token "N" had the same token group as the previous special token "[CLS]", it was not highlighted. However, it was detected correctly. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7716/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7716/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7715 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7715/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7715/comments | https://api.github.com/repos/huggingface/transformers/issues/7715/events | https://github.com/huggingface/transformers/issues/7715 | 718,927,476 | MDU6SXNzdWU3MTg5Mjc0NzY= | 7,715 | examples/rag: test coverage, tiny model | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
},
{
"id": 1834088753,
"node_id": "MDU6TGFiZWwxODM0MDg4NzUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Tests",
"name": "Tests",
"color": "a6fcca",
"default": false,
"description": "Related to tests"
},
{
"id": 2373468354,
"node_id": "MDU6TGFiZWwyMzczNDY4MzU0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/rag",
"name": "rag",
"color": "e58e85",
"default": false,
"description": ""
},
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [
"sign me up, Sam",
"@sshleifer \r\n\r\nexamples/rag/finetune.py is not that stable. Seems like it depends on the pytorch_lightning version also. It would be nice if we can test it properly. ",
"I think this is still waiting for: https://github.com/huggingface/transformers/issues/8284 to complete the missing info and perhaps some tests were added since then?\r\n",
"This issue has been stale for 1 month.",
"As the required to implement this info was never provided and I since then moved to work on other things I removed self-assignment to this ticket...",
"Should this issue be closed looks like [rag](https://github.com/huggingface/transformers/tree/main/examples/research_projects/rag) now has some tests [here](https://github.com/huggingface/transformers/blob/main/examples/research_projects/rag/_test_finetune_rag.py) and [here](https://github.com/huggingface/transformers/blob/main/examples/research_projects/rag/test_distributed_retriever.py)."
] | 1,602 | 1,705 | null | CONTRIBUTOR | null | Disclaimer: I don't know this code very well, this may be much harder than it seems.
Blocking PR: #7713
[`examples/rag/finetune.py`, `examples/rag/finetune.sh`, `eval_rag.py`] do not seem to be tested at all.
It would be good to have a `test_finetune.py`, like the one in `examples/seq2seq`, that tests these.
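Even a bare smoke test would catch import errors like #7713 — a hypothetical shape (the path and what counts as a pass are assumptions):
```
import subprocess
import sys

def test_finetune_smoke():
    # the cheapest possible check: the script imports and parses --help without crashing
    result = subprocess.run([sys.executable, "examples/rag/finetune.py", "--help"])
    assert result.returncode == 0
```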
cc @stas00 if interested, rag is a cool new retrieval model https://arxiv.org/pdf/2005.11401.pdf | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7715/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7715/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7714 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7714/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7714/comments | https://api.github.com/repos/huggingface/transformers/issues/7714/events | https://github.com/huggingface/transformers/pull/7714 | 718,924,419 | MDExOlB1bGxSZXF1ZXN0NTAxMjA0NDMw | 7,714 | Fix typo in all model docs | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,602 | 1,602 | 1,602 | COLLABORATOR | null | # What does this PR do?
Like #7703 but for all other models (maked-> masked) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7714/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7714/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7714",
"html_url": "https://github.com/huggingface/transformers/pull/7714",
"diff_url": "https://github.com/huggingface/transformers/pull/7714.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7714.patch",
"merged_at": 1602490020000
} |
https://api.github.com/repos/huggingface/transformers/issues/7713 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7713/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7713/comments | https://api.github.com/repos/huggingface/transformers/issues/7713/events | https://github.com/huggingface/transformers/issues/7713 | 718,924,264 | MDU6SXNzdWU3MTg5MjQyNjQ= | 7,713 | rag examples tests fail | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,602 | 1,602 | 1,602 | CONTRIBUTOR | null | ```
================================================================ ERRORS =================================================================
______________________________________ ERROR collecting examples/rag/test_distributed_retriever.py ______________________________________
ImportError while importing test module '/Users/shleifer/transformers_fork/examples/rag/test_distributed_retriever.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
examples/rag/test_distributed_retriever.py:26: in <module>
from examples.rag.distributed_retriever import RagPyTorchDistributedRetriever # noqa: E402 # isort:skip
E ModuleNotFoundError: No module named 'examples.rag'
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7713/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7713/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7712 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7712/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7712/comments | https://api.github.com/repos/huggingface/transformers/issues/7712/events | https://github.com/huggingface/transformers/pull/7712 | 718,924,126 | MDExOlB1bGxSZXF1ZXN0NTAxMjA0MjMy | 7,712 | fix examples/rag imports, tests | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@stas00 does this look reasonable? Am I missing anything you did for `examples/seq2seq`?",
"yes, except you shouldn't need \r\n```\r\nsys.path.append(os.path.join(os.getcwd())) # noqa: E402 # noqa: E402 # isort:skip\r\n```\r\nthat's what `__init__.py` already did for you.\r\n\r\nand once removed, all those subsequent import `# noqa:` comments can be removed too. (except PL import)\r\n\r\nbesides using `cwd` is a bad idea - who knows where the script is invoked from and not just from the same dir as the script itself. Use `__file__` instead, which is deterministic. but it's not needed here.\r\n",
"Scripts work, will merge on CI success.",
"btw, you can also remove most of the `# noqa: E402 # isort:skipq` as they are no longer needed."
] | 1,602 | 1,602 | 1,602 | CONTRIBUTOR | null | Before
```bash
pytest examples/rag
```
fails with
```
================================================================ ERRORS =================================================================
______________________________________ ERROR collecting examples/rag/test_distributed_retriever.py ______________________________________
ImportError while importing test module '/Users/shleifer/transformers_fork/examples/rag/test_distributed_retriever.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
examples/rag/test_distributed_retriever.py:26: in <module>
from examples.rag.distributed_retriever import RagPyTorchDistributedRetriever # noqa: E402 # isort:skip
E ModuleNotFoundError: No module named 'examples.rag'
```
After, the same command passes.
The fix was to change
`from examples.rag.file_name` -> `from file_name`
and add the same `sys.path` magic that `examples/seq2seq/` uses.
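For reference, that `sys.path` pattern looks roughly like this (per the review discussion, based on `__file__` rather than `os.getcwd()`):
```
import os
import sys

# make sibling modules (e.g. distributed_retriever.py) importable no matter where pytest is launched from
sys.path.append(os.path.dirname(os.path.abspath(__file__)))
```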
### TODO
- test the scripts in `examples/rag/README.md` after this change
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7712/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7712/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7712",
"html_url": "https://github.com/huggingface/transformers/pull/7712",
"diff_url": "https://github.com/huggingface/transformers/pull/7712.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7712.patch",
"merged_at": 1602689700000
} |
https://api.github.com/repos/huggingface/transformers/issues/7711 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7711/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7711/comments | https://api.github.com/repos/huggingface/transformers/issues/7711/events | https://github.com/huggingface/transformers/issues/7711 | 718,923,732 | MDU6SXNzdWU3MTg5MjM3MzI= | 7,711 | 2 Deberta test failures | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"Duplicate of https://github.com/huggingface/transformers/issues/7565\r\n\r\nWorking on it in https://github.com/huggingface/transformers/pull/7645"
] | 1,602 | 1,602 | 1,602 | CONTRIBUTOR | null | I suspect these are related to recent tokenizer changes:
https://github.com/huggingface/transformers/runs/1236753957?check_suite_focus=true
```
FAILED tests/test_tokenization_deberta.py::DebertaTokenizationTest::test_torch_encode_plus_sent_to_model
FAILED tests/test_modeling_deberta.py::DebertaModelIntegrationTest::test_inference_classification_head
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7711/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7711/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7710 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7710/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7710/comments | https://api.github.com/repos/huggingface/transformers/issues/7710/events | https://github.com/huggingface/transformers/issues/7710 | 718,923,593 | MDU6SXNzdWU3MTg5MjM1OTM= | 7,710 | 2 RAG test failures | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"@sshleifer - thanks a lot for the issue! \r\n\r\nThis error seems related to https://github.com/huggingface/transformers/issues/7690#issuecomment-707382445. "
] | 1,602 | 1,602 | 1,602 | CONTRIBUTOR | null | The two failures here look related to recent tokenizer changes:
https://github.com/huggingface/transformers/runs/1236753957?check_suite_focus=true
```
=========================== short test summary info ============================
FAILED tests/test_modeling_rag.py::RagModelIntegrationTests::test_rag_sequence_generate_batch
FAILED tests/test_modeling_rag.py::RagModelIntegrationTests::test_rag_token_generate_batch
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7710/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7710/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7709 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7709/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7709/comments | https://api.github.com/repos/huggingface/transformers/issues/7709/events | https://github.com/huggingface/transformers/pull/7709 | 718,869,523 | MDExOlB1bGxSZXF1ZXN0NTAxMTYzMDQy | 7,709 | [marian] Automate Tatoeba-Challenge conversion | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2039044877,
"node_id": "MDU6TGFiZWwyMDM5MDQ0ODc3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/marian",
"name": "marian",
"color": "30cc95",
"default": false,
"description": ""
}
] | closed | false | null | [] | [] | 1,602 | 1,602 | 1,602 | CONTRIBUTOR | null | This allows conversion of marian models from the Tatoeba-Challenge repo through the command line,
with instructions at `scripts/tatoeba/README.md`. This was previously impossible.
Tests are in `examples/` because the conversion requires examples dependencies (wget and pandas).
In my opinion, it would be a lot of work for very little benefit to remove these dependencies.
The goal of this PR is to allow @jorgtied to upload his own models. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7709/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7709/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7709",
"html_url": "https://github.com/huggingface/transformers/pull/7709",
"diff_url": "https://github.com/huggingface/transformers/pull/7709.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7709.patch",
"merged_at": 1602519865000
} |
https://api.github.com/repos/huggingface/transformers/issues/7708 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7708/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7708/comments | https://api.github.com/repos/huggingface/transformers/issues/7708/events | https://github.com/huggingface/transformers/issues/7708 | 718,841,318 | MDU6SXNzdWU3MTg4NDEzMTg= | 7,708 | Fine Tuning SciBERT NLI model | {
"login": "duttaprat",
"id": 29531232,
"node_id": "MDQ6VXNlcjI5NTMxMjMy",
"avatar_url": "https://avatars.githubusercontent.com/u/29531232?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/duttaprat",
"html_url": "https://github.com/duttaprat",
"followers_url": "https://api.github.com/users/duttaprat/followers",
"following_url": "https://api.github.com/users/duttaprat/following{/other_user}",
"gists_url": "https://api.github.com/users/duttaprat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/duttaprat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/duttaprat/subscriptions",
"organizations_url": "https://api.github.com/users/duttaprat/orgs",
"repos_url": "https://api.github.com/users/duttaprat/repos",
"events_url": "https://api.github.com/users/duttaprat/events{/privacy}",
"received_events_url": "https://api.github.com/users/duttaprat/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Take a look at the official tutorial on fine-tuning a model on your own dataset here: https://huggingface.co/transformers/custom_datasets.html\r\n\r\nWhat's your dataset about? Is it text classification, question answering?\r\n\r\nBTW, please post any questions which are not bugs/new features you would like to see added on the [forum](https://discuss.huggingface.co/) rather than here. ",
"Thanks @NielsRogge for the reply. \r\n\r\nMy dataset contains two clinical sentences (S1, S2) and its corresponding relation ('Entailment'/'Contradiction'/'Neutral'). So it basically contains three columns. ",
"Ok so that's sentence pair classification. There's an example notebook on that [here](https://github.com/NadirEM/nlp-notebooks/blob/master/Fine_tune_ALBERT_sentence_pair_classification.ipynb).",
"Oh that's really helpful. Thank you again @NielsRogge ."
] | 1,602 | 1,602 | 1,602 | NONE | null | # ❓ Questions & Help
## Details
Hi,
I am a novice in the domain of fine-tuning any Transformer models using my own dataset. I want to fine-tune the SciBERT NLI model (https://huggingface.co/gsarti/scibert-nli) using my dataset. The dataset format is
S1 [SEP] S2 [SEP] Inference
I am not sure how to fine-tune my dataset on the SciBERT NLI model (https://huggingface.co/gsarti/scibert-nli).
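For reference, the basic shape of sentence-pair fine-tuning looks roughly like this (just a sketch with placeholder sentences; the 0/1/2 label mapping is my assumption, not something from the model card):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("gsarti/scibert-nli")
# num_labels=3 initializes a fresh 3-way classification head on top of the encoder
model = AutoModelForSequenceClassification.from_pretrained("gsarti/scibert-nli", num_labels=3)

s1 = ["The patient shows fever."]             # placeholder S1
s2 = ["The patient has a high temperature."]  # placeholder S2
labels = torch.tensor([0])  # assumed mapping: 0=Entailment, 1=Contradiction, 2=Neutral

inputs = tokenizer(s1, s2, padding=True, truncation=True, return_tensors="pt")
outputs = model(**inputs, labels=labels, return_dict=True)
outputs.loss.backward()  # from here, plug into an optimizer loop or the Trainer API
```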
I am sorry for asking a very simple question, but any suggestion or link would help. Thanks in advance. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7708/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7708/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7707 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7707/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7707/comments | https://api.github.com/repos/huggingface/transformers/issues/7707/events | https://github.com/huggingface/transformers/issues/7707 | 718,821,548 | MDU6SXNzdWU3MTg4MjE1NDg= | 7,707 | [NEW MODEL] Multilingual document embeddings: cT-LASER | {
"login": "hypnopump",
"id": 16491423,
"node_id": "MDQ6VXNlcjE2NDkxNDIz",
"avatar_url": "https://avatars.githubusercontent.com/u/16491423?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hypnopump",
"html_url": "https://github.com/hypnopump",
"followers_url": "https://api.github.com/users/hypnopump/followers",
"following_url": "https://api.github.com/users/hypnopump/following{/other_user}",
"gists_url": "https://api.github.com/users/hypnopump/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hypnopump/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hypnopump/subscriptions",
"organizations_url": "https://api.github.com/users/hypnopump/orgs",
"repos_url": "https://api.github.com/users/hypnopump/repos",
"events_url": "https://api.github.com/users/hypnopump/events{/privacy}",
"received_events_url": "https://api.github.com/users/hypnopump/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,602 | 1,608 | 1,608 | NONE | null | # 🌟 New model addition
## Model description
Multilingual document embeddings by adapting the LASER architecture (which is based on BiLSTM) to transformer architectures.
## Open source status
* [x] the model implementation is available: open source implementation is here: https://github.com/ever4244/tfm_laser_0520
* [ ] the model weights are available: I haven't found them
* [x] who are the authors: @ever4244
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7707/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7707/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7706 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7706/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7706/comments | https://api.github.com/repos/huggingface/transformers/issues/7706/events | https://github.com/huggingface/transformers/issues/7706 | 718,821,080 | MDU6SXNzdWU3MTg4MjEwODA= | 7,706 | run_tf_text_classification.py for custom dataset | {
"login": "zixiliuUSC",
"id": 49173327,
"node_id": "MDQ6VXNlcjQ5MTczMzI3",
"avatar_url": "https://avatars.githubusercontent.com/u/49173327?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zixiliuUSC",
"html_url": "https://github.com/zixiliuUSC",
"followers_url": "https://api.github.com/users/zixiliuUSC/followers",
"following_url": "https://api.github.com/users/zixiliuUSC/following{/other_user}",
"gists_url": "https://api.github.com/users/zixiliuUSC/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zixiliuUSC/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zixiliuUSC/subscriptions",
"organizations_url": "https://api.github.com/users/zixiliuUSC/orgs",
"repos_url": "https://api.github.com/users/zixiliuUSC/repos",
"events_url": "https://api.github.com/users/zixiliuUSC/events{/privacy}",
"received_events_url": "https://api.github.com/users/zixiliuUSC/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@jplu might be interested in that issue."
] | 1,602 | 1,602 | 1,602 | NONE | null | ## Environment info
- `transformers` version: master
- Platform: ubuntu 18.04
- Python version: 3.6
- PyTorch version (GPU?): 2080ti
- Tensorflow version (GPU?): 2080ti
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: No
### Who can help
@stefan-it
## Information
Model I am using (Bert, XLNet ...): Longformer
The problem arises when using: run_tf_text_classification.py
The task I am working on is: text classification on a customized dataset
## To reproduce
Create a CSV file like the one below:
```
exp.csv
id,sentence1,sentence2
0,you,he
0,you,he
0,you,he
0,you,he
0,you,he
0,you,he
0,you,he
0,you,he
0,you,he
```
Run the official script (I actually ran it split across Jupyter notebook cells, but the code is the same):
```bash
python run_tf_text_classification.py \
  --train_file exp.csv \   ### training dataset file location (mandatory if running with --do_train option)
  --dev_file exp.csv \   ### development dataset file location (mandatory if running with --do_eval option)
  --test_file exp.csv \   ### test dataset file location (mandatory if running with --do_predict option)
  --label_column_id 0 \   ### which column corresponds to the labels
  --model_name_or_path allenai/longformer-base-4096 \
  --output_dir model \
  --num_train_epochs 4 \
  --per_device_train_batch_size 16 \
  --per_device_eval_batch_size 32 \
  --do_train \
  --do_eval \
  --do_predict \
  --logging_steps 10 \
  --evaluate_during_training \
  --save_steps 10 \
  --overwrite_output_dir \
  --max_seq_length 128
```
## Observed behavior (error traceback)
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-43-1bd059d64d85> in <module>
5 tokenizer=tokenizer,
6 label_column_id=0,
----> 7 max_seq_length=200,
8 )
<ipython-input-42-7c289105c656> in get_tfds(train_file, eval_file, test_file, tokenizer, label_column_id, max_seq_length)
43 padding="max_length",
44 ),
---> 45 batched=True,
46 )
47 def gen_train():
~/anaconda3/envs/longtext-longformer/lib/python3.6/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)
1254 fn_kwargs=fn_kwargs,
1255 new_fingerprint=new_fingerprint,
-> 1256 update_data=update_data,
1257 )
1258 else:
~/anaconda3/envs/longtext-longformer/lib/python3.6/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
154 "output_all_columns": self._output_all_columns,
155 }
--> 156 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
157 if new_format["columns"] is not None:
158 new_format["columns"] = list(set(out.column_names) - unformatted_columns)
~/anaconda3/envs/longtext-longformer/lib/python3.6/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
161 # Call actual function
162
--> 163 out = func(self, *args, **kwargs)
164
165 # Update fingerprint of in-place transforms + update in-place history of transforms
~/anaconda3/envs/longtext-longformer/lib/python3.6/site-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, update_data)
1515 try:
1516 batch = apply_function_on_filtered_inputs(
-> 1517 batch, indices, check_same_num_examples=len(self.list_indexes()) > 0, offset=offset
1518 )
1519 except NumExamplesMismatch:
~/anaconda3/envs/longtext-longformer/lib/python3.6/site-packages/datasets/arrow_dataset.py in apply_function_on_filtered_inputs(inputs, indices, check_same_num_examples, offset)
1433 effective_indices = [i + offset for i in indices] if isinstance(indices, list) else indices + offset
1434 processed_inputs = (
-> 1435 function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
1436 )
1437 if not update_data:
<ipython-input-42-7c289105c656> in <lambda>(example)
41 truncation=True,
42 max_length=max_seq_length,
---> 43 padding="max_length",
44 ),
45 batched=True,
~/anaconda3/envs/longtext-longformer/lib/python3.6/site-packages/transformers/tokenization_utils_base.py in batch_encode_plus(self, batch_text_or_text_pairs, add_special_tokens, padding, truncation, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)
2211 return_length=return_length,
2212 verbose=verbose,
-> 2213 **kwargs,
2214 )
2215
~/anaconda3/envs/longtext-longformer/lib/python3.6/site-packages/transformers/tokenization_utils.py in _batch_encode_plus(self, batch_text_or_text_pairs, add_special_tokens, padding_strategy, truncation_strategy, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)
558 ids, pair_ids = ids_or_pair_ids, None
559 else:
--> 560 ids, pair_ids = ids_or_pair_ids
561
562 first_ids = get_input_ids(ids)
ValueError: too many values to unpack (expected 2)
```
### Suggested fix
In lines 60-69 of the script, my suggestion is shown in the comments below:
```
for k in files.keys():
transformed_ds[k] = ds[k].map(
lambda example: tokenizer.batch_encode_plus(
(example[features_name[0]], features_name[1]), # it should be (example[features_name[0]], example[features_name[1]])
truncation=True,
max_length=max_seq_length,
padding="max_length",
),
batched=True, # batched needs to be set to True; I don't know why batched=True doesn't work here
)
```
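A further thought: with `batched=True`, the mapped function receives lists of values, so the two columns may need to be zipped into explicit sentence pairs. An untested sketch, reusing the variable names from the snippet above:
```python
transformed_ds[k] = ds[k].map(
    lambda example: tokenizer.batch_encode_plus(
        list(zip(example[features_name[0]], example[features_name[1]])),  # list of (sentence1, sentence2) pairs
        truncation=True,
        max_length=max_seq_length,
        padding="max_length",
    ),
    batched=True,
)
```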
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7706/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7706/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7705 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7705/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7705/comments | https://api.github.com/repos/huggingface/transformers/issues/7705/events | https://github.com/huggingface/transformers/issues/7705 | 718,795,477 | MDU6SXNzdWU3MTg3OTU0Nzc= | 7,705 | Recording training loss and perplexity during training | {
"login": "jasonyliang",
"id": 19767870,
"node_id": "MDQ6VXNlcjE5NzY3ODcw",
"avatar_url": "https://avatars.githubusercontent.com/u/19767870?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jasonyliang",
"html_url": "https://github.com/jasonyliang",
"followers_url": "https://api.github.com/users/jasonyliang/followers",
"following_url": "https://api.github.com/users/jasonyliang/following{/other_user}",
"gists_url": "https://api.github.com/users/jasonyliang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jasonyliang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jasonyliang/subscriptions",
"organizations_url": "https://api.github.com/users/jasonyliang/orgs",
"repos_url": "https://api.github.com/users/jasonyliang/repos",
"events_url": "https://api.github.com/users/jasonyliang/events{/privacy}",
"received_events_url": "https://api.github.com/users/jasonyliang/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,602 | 1,608 | 1,608 | NONE | null | I'm fine-tuning GPT-2 text generation with the following command on Colab:
```bash
python run_language_modeling.py \
  --output_dir=$OUTPUT_DIR \
  --model_type=gpt2 \
  --model_name_or_path=$MODEL_NAME \
  --do_train \
  --train_data_file=$TRAIN_FILE \
  --do_eval \
  --eval_data_file=$TEST_FILE \
  --per_gpu_train_batch_size=1 \
  --save_steps=-1 \
  --num_train_epochs=5
```
I was wondering if I can record the training loss, perplexity, etc. per epoch to a CSV file, or save them as variables on Colab?
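One approach I could imagine (a sketch only; it assumes building the `Trainer` from your own code rather than the CLI script, and that your version ships `TrainerCallback`) is to append every logged metric dict to a file; per-epoch perplexity is then just `math.exp` of the logged loss:
```python
import json
from transformers import TrainerCallback

class JsonlLoggerCallback(TrainerCallback):
    """Append every logged metric dict (loss, eval_loss, epoch, ...) to a JSONL file."""
    def __init__(self, path="training_log.jsonl"):
        self.path = path

    def on_log(self, args, state, control, logs=None, **kwargs):
        if logs:
            with open(self.path, "a") as f:
                f.write(json.dumps({"step": state.global_step, **logs}) + "\n")

# hypothetical wiring: trainer = Trainer(..., callbacks=[JsonlLoggerCallback()])
```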
Thank you so much! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7705/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7705/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7704 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7704/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7704/comments | https://api.github.com/repos/huggingface/transformers/issues/7704/events | https://github.com/huggingface/transformers/issues/7704 | 718,782,009 | MDU6SXNzdWU3MTg3ODIwMDk= | 7,704 | SQuAD example docs inaccurately suggest settings for bert-large-uncased on a single V100 | {
"login": "nelson-liu",
"id": 7272031,
"node_id": "MDQ6VXNlcjcyNzIwMzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/7272031?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nelson-liu",
"html_url": "https://github.com/nelson-liu",
"followers_url": "https://api.github.com/users/nelson-liu/followers",
"following_url": "https://api.github.com/users/nelson-liu/following{/other_user}",
"gists_url": "https://api.github.com/users/nelson-liu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nelson-liu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nelson-liu/subscriptions",
"organizations_url": "https://api.github.com/users/nelson-liu/orgs",
"repos_url": "https://api.github.com/users/nelson-liu/repos",
"events_url": "https://api.github.com/users/nelson-liu/events{/privacy}",
"received_events_url": "https://api.github.com/users/nelson-liu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,602 | 1,608 | 1,608 | NONE | null | Hi!
In the SQuAD example docs, it's noted that ( https://github.com/huggingface/transformers/blob/3f42eb979f7bd20448ff6b15ab316d63f5489a6f/docs/source/examples.md#fine-tuning-bert-on-squad10 ) `This example code fine-tunes BERT on the SQuAD1.0 dataset. It runs in 24 min (with BERT-base) or 68 min (with BERT-large) on a single tesla V100 16GB.`
Could just be me, but I don't think the example as provided works with `s/bert-base-uncased/bert-large-uncased/`---even with FP16, you run into a GPU OOM.
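If the recommendation is kept, it may be worth spelling out settings that actually fit in 16GB, e.g. lowering `--per_gpu_train_batch_size` and compensating with `--gradient_accumulation_steps` alongside `--fp16`; I haven't verified the exact values that work.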
It might be useful to revisit this recommendation and/or remove it. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7704/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7704/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7703 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7703/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7703/comments | https://api.github.com/repos/huggingface/transformers/issues/7703/events | https://github.com/huggingface/transformers/pull/7703 | 718,770,485 | MDExOlB1bGxSZXF1ZXN0NTAxMDg4NDA1 | 7,703 | Corrected typo: maked → masked | {
"login": "miguelvictor",
"id": 6831138,
"node_id": "MDQ6VXNlcjY4MzExMzg=",
"avatar_url": "https://avatars.githubusercontent.com/u/6831138?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/miguelvictor",
"html_url": "https://github.com/miguelvictor",
"followers_url": "https://api.github.com/users/miguelvictor/followers",
"following_url": "https://api.github.com/users/miguelvictor/following{/other_user}",
"gists_url": "https://api.github.com/users/miguelvictor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/miguelvictor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/miguelvictor/subscriptions",
"organizations_url": "https://api.github.com/users/miguelvictor/orgs",
"repos_url": "https://api.github.com/users/miguelvictor/repos",
"events_url": "https://api.github.com/users/miguelvictor/events{/privacy}",
"received_events_url": "https://api.github.com/users/miguelvictor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Ugh and I copied that docstrings to all other models... Thanks for fixing!"
] | 1,602 | 1,602 | 1,602 | CONTRIBUTOR | null | # What does this PR do?
Fixed small typo in the BERT documentation.
## Before submitting
✅ This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
@LysandreJik @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7703/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7703/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7703",
"html_url": "https://github.com/huggingface/transformers/pull/7703",
"diff_url": "https://github.com/huggingface/transformers/pull/7703.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7703.patch",
"merged_at": 1602449101000
} |
https://api.github.com/repos/huggingface/transformers/issues/7702 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7702/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7702/comments | https://api.github.com/repos/huggingface/transformers/issues/7702/events | https://github.com/huggingface/transformers/issues/7702 | 718,743,604 | MDU6SXNzdWU3MTg3NDM2MDQ= | 7,702 | Trainer callback breaks old code | {
"login": "cccntu",
"id": 31893406,
"node_id": "MDQ6VXNlcjMxODkzNDA2",
"avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cccntu",
"html_url": "https://github.com/cccntu",
"followers_url": "https://api.github.com/users/cccntu/followers",
"following_url": "https://api.github.com/users/cccntu/following{/other_user}",
"gists_url": "https://api.github.com/users/cccntu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cccntu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cccntu/subscriptions",
"organizations_url": "https://api.github.com/users/cccntu/orgs",
"repos_url": "https://api.github.com/users/cccntu/repos",
"events_url": "https://api.github.com/users/cccntu/events{/privacy}",
"received_events_url": "https://api.github.com/users/cccntu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Or add this check in `on_evaluate`\r\n```python\r\nif self.prediction_bar is not None:\r\n self.prediction_bar.close()\r\n```\r\n",
"I think the second solution is better (since it should be there in any case). We can add the events you suggest in the first one if we find other use cases that need them.\r\n\r\nDo you want to tackle this in a PR?",
"I was thinking we can move the opening and closing of `prediction_bar` into separate events, so we don't need the `if` statements. \r\nBut it can be hard to debug if someone miss the opening, so it's probably unnecessary.\r\n\r\nI will create a PR with the second solution."
] | 1,602 | 1,602 | 1,602 | CONTRIBUTOR | null | https://github.com/huggingface/transformers/blob/ba4bbd92bcb55febbfa06aaa1551738388ec7eb0/src/transformers/trainer_callback.py#L438-L447
Currently it depends on the fact that evaluate() will first call `self.prediction_loop` https://github.com/huggingface/transformers/blob/ba4bbd92bcb55febbfa06aaa1551738388ec7eb0/src/transformers/trainer.py#L1181
which will then call `.callback_handler.on_prediction_step`
https://github.com/huggingface/transformers/blob/ba4bbd92bcb55febbfa06aaa1551738388ec7eb0/src/transformers/trainer.py#L1270
But in my old code (3.1.0), I subclass `Trainer` and override `evaluate()` without calling `self.prediction_loop`,
which results in this error:
```
self.prediction_bar.close()
AttributeError: 'NoneType' object has no attribute 'close'
```
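A defensive guard in the callback would already avoid the crash. A sketch, with attribute and argument names assumed from the current `ProgressCallback`:
```python
def on_evaluate(self, args, state, control, **kwargs):
    if state.is_local_process_zero:
        if self.prediction_bar is not None:
            self.prediction_bar.close()
        self.prediction_bar = None
```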
I propose we add `on_predict_begin` and `on_predict_end`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7702/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7702/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7701 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7701/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7701/comments | https://api.github.com/repos/huggingface/transformers/issues/7701/events | https://github.com/huggingface/transformers/issues/7701 | 718,738,856 | MDU6SXNzdWU3MTg3Mzg4NTY= | 7,701 | Strange error while using the `LongformerForMultipleChoice` | {
"login": "h56cho",
"id": 52889259,
"node_id": "MDQ6VXNlcjUyODg5MjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/h56cho",
"html_url": "https://github.com/h56cho",
"followers_url": "https://api.github.com/users/h56cho/followers",
"following_url": "https://api.github.com/users/h56cho/following{/other_user}",
"gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/h56cho/subscriptions",
"organizations_url": "https://api.github.com/users/h56cho/orgs",
"repos_url": "https://api.github.com/users/h56cho/repos",
"events_url": "https://api.github.com/users/h56cho/events{/privacy}",
"received_events_url": "https://api.github.com/users/h56cho/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"The way `xxxForMultipleChoice` models work is actually a bit tricky. It works as follows (based on the [original explanation by the author of BERT](https://github.com/google-research/bert/issues/38)):\r\n\r\nGiven a question, and several options, the question + options are processed by the model independently. So they will look as follows: `[CLS] question [SEP] option 1 [SEP]` first, then `[CLS] question [SEP] option 2 [SEP]` second, and so on. \r\n\r\nSo when you're using the tokenizer to encode the input, it should be used as follows:\r\n```\r\n# my multiple choice question has 4 options.\r\nquestion = \"this is a question\"\r\noption1 = \"option 1\"\r\noption2 = \"option 2\"\r\noption3 = \"option 3\"\r\noption4 = \"option 4\"\r\n\r\nencoded_input = longformer_tokenizer([question, question, question, question], \r\n [option1, option2, option3, option4], \r\n return_tensors='pt', \r\n padding='max_length')\r\n```\r\nwe need to `unsqueeze` the values of that dictionary, so to make sure they are all of shape (batch_size, num_choices, seq_len), or thus (1, 4, 4096). Also, the answer should be a tensor having shape (batch_size,), so in our case this is just a tensor containing a single element, containing the index of the correct option. Suppose the correct option is 3, then `answer` will be `torch.tensor([2])` (since indexing starts at zero). Next, we can run the forward pass as follows:\r\n\r\n```\r\nmc_labels = torch.tensor([2])\r\noutputs = model(**{k: v.unsqueeze(0) for k,v in encoding.items()}, labels=mc_labels,\r\n return_dict=True) # batch size is 1\r\n```\r\n\r\nThe `outputs` will be a `MultipleChoiceOutput` containing the loss and the logits. The logits are of shape (batch_size, number of choices), so (1, 4). \r\n\r\nIf you want to train the model, simply get the loss using `outputs.loss` and perform `loss.backward()`. If you want to get the predictions of the model, convert the logits into predictions by typing `outputs.logits.argmax(-1)`. \r\n\r\nUPDATE: fixed the fact that the answer shouldn't be unsqueezed, since it's just a tensor of shape (batch_size,). \r\nUPDATE: fix indexing of answer.\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,602 | 1,612 | 1,608 | NONE | null | Hello,
I am trying to use the `LongformerForMultipleChoice` model, and the code I am using is the following:
```python
import torch
from transformers import LongformerTokenizer, LongformerForMultipleChoice

# import the pre-trained HuggingFace Longformer tokenizer.
longformer_tokenizer = LongformerTokenizer.from_pretrained('allenai/longformer-base-4096')
# get the pre-trained HuggingFace Longformer
best_model_longformer = LongformerForMultipleChoice.from_pretrained('allenai/longformer-base-4096',
output_hidden_states = True)
# my multiple choice question has 4 options.
question_list = [main_question, main_question, main_question, main_question]
options_list = [option1, option2, option3, option4]
mc_labels = torch.tensor([my_answer])
encoded_dict = longformer_tokenizer(question_list, options_list,
return_tensors = 'pt',
add_prefix_space = True,
padding = True)
input_hidden_state = best_model_longformer(
**{k: v.unsqueeze(0) for k,v in encoded_dict.items()},
labels = mc_labels)[2][0][:,:,:].detach()
```
and I am getting the error below:
```
/home/ec2-user/anaconda3/lib/python3.7/site-packages/transformers/modeling_longformer.py:71: UserWarning: This overload of nonzero is deprecated:
nonzero()
Consider using one of the following signatures instead:
nonzero(*, bool as_tuple) (Triggered internally at /pytorch/torch/csrc/utils/python_arg_parser.cpp:766.)
sep_token_indices = (input_ids == sep_token_id).nonzero()
Traceback (most recent call last):
File "SEED_125_V20_15_LONGFORMER.py", line 427, in <module>
main_function('/home/ec2-user/G1G2.txt','/home/ec2-user/G1G2_answer_num.txt', num_iter)
File "SEED_125_V20_15_LONGFORMER.py", line 389, in main_function
best_model_longformer)
File "SEED_125_V20_15_LONGFORMER.py", line 198, in fill_MC_loss_accuracy_tensor
input_hidden_state = best_model_longformer(**{k: v.unsqueeze(0) for k,v in encoded_dict.items()}, labels = mc_labels)[2][0][:,:,:].detach()
File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/transformers/modeling_longformer.py", line 1808, in forward
loss = loss_fct(reshaped_logits, labels)
File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/torch/nn/modules/loss.py", line 948, in forward
ignore_index=self.ignore_index, reduction=self.reduction)
File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/torch/nn/functional.py", line 2422, in cross_entropy
return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/torch/nn/functional.py", line 2218, in nll_loss
ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
IndexError: Target 1 is out of bounds.
```
How can I fix this error?
I also tried solving this issue with:
```python
# group question_list and options_list as a single list rather than specifying them separately
encoded_dict = longformer_tokenizer([question_list, options_list],
return_tensors = 'pt',
add_prefix_space = True,
padding = True)
```
But this generates a different error, saying:
```
ValueError: 2 expected but found 1
```
PS: I don't think my Longformer model correctly picks up that my multiple-choice questions have 4 options... is there any way to make the Longformer take multiple-choice questions with 4 options (instead of 2)?
Thank you.
PS: I am more interested in extracting the hidden embeddings than in the loss or the logits themselves. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7701/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7701/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7700 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7700/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7700/comments | https://api.github.com/repos/huggingface/transformers/issues/7700/events | https://github.com/huggingface/transformers/issues/7700 | 718,711,679 | MDU6SXNzdWU3MTg3MTE2Nzk= | 7,700 | GPT2DoubleHeadsModel documentation example question (error in documentation)? | {
"login": "h56cho",
"id": 52889259,
"node_id": "MDQ6VXNlcjUyODg5MjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/h56cho",
"html_url": "https://github.com/h56cho",
"followers_url": "https://api.github.com/users/h56cho/followers",
"following_url": "https://api.github.com/users/h56cho/following{/other_user}",
"gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/h56cho/subscriptions",
"organizations_url": "https://api.github.com/users/h56cho/orgs",
"repos_url": "https://api.github.com/users/h56cho/repos",
"events_url": "https://api.github.com/users/h56cho/events{/privacy}",
"received_events_url": "https://api.github.com/users/h56cho/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,602 | 1,608 | 1,608 | NONE | null | Hello,
I was reading the documentation for the GPT2DoubleHeadsModel, and I have a question.
In the documentation, the example shown is:
```python
import torch
from transformers import GPT2Tokenizer, GPT2DoubleHeadsModel
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2DoubleHeadsModel.from_pretrained('gpt2', return_dict=True)
# Add a [CLS] to the vocabulary (we should train it also!)
num_added_tokens = tokenizer.add_special_tokens({'cls_token': '[CLS]'})
embedding_layer = model.resize_token_embeddings(len(tokenizer)) # Update the model embeddings with the new vocabulary size
choices = ["Hello, my dog is cute [CLS]", "Hello, my cat is cute [CLS]"]
encoded_choices = [tokenizer.encode(s) for s in choices]
cls_token_location = [tokens.index(tokenizer.cls_token_id) for tokens in encoded_choices]
input_ids = torch.tensor(encoded_choices).unsqueeze(0) # Batch size: 1, number of choices: 2
mc_token_ids = torch.tensor([cls_token_location]) # Batch size: 1
outputs = model(input_ids, mc_token_ids=mc_token_ids)
lm_logits = outputs.lm_logits
mc_logits = outputs.mc_logits
```
Now, I don't see any main question statement in this example, although I see two multiple-choice options. The way I used to use `GPT2DoubleHeadsModel` was to first call `tokenizer(question_statement, option_statement)`, then take `encoded_dict['input_ids']` to extract the `input_ids` and, similarly, `encoded_dict['token_type_ids']` to extract the `token_type_ids`. Has this changed? I get the impression that the example is wrong (maybe the example could apply to BERT, but not to GPT2DoubleHeadsModel). Is this an error in the documentation? I thought that, since GPT-2 does causal language modeling, the question statement and the option statement have to be encoded together, with the `[CLS]` token placed at the end (usually), so that GPT-2 can apply the causal language modeling process to solve the multiple-choice problem.
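To illustrate, the pattern I mean looks roughly like this (a sketch with placeholder strings; it assumes the two encodings come out the same length, otherwise they must be padded before stacking):
```python
question = "Where do penguins live?"
options = ["Antarctica [CLS]", "In the desert [CLS]"]  # placeholder choices
encoded = [tokenizer.encode(question + " " + o) for o in options]
input_ids = torch.tensor(encoded).unsqueeze(0)  # (batch=1, n_choices, seq_len)
mc_token_ids = torch.tensor([[ids.index(tokenizer.cls_token_id) for ids in encoded]])
outputs = model(input_ids, mc_token_ids=mc_token_ids)
```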
Reading this again, I think the examples for BERTForMultipleChoice and GPT2DoubleHeadsModel are flipped.
Thanks, | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7700/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7700/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7699 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7699/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7699/comments | https://api.github.com/repos/huggingface/transformers/issues/7699/events | https://github.com/huggingface/transformers/pull/7699 | 718,688,925 | MDExOlB1bGxSZXF1ZXN0NTAxMDI3NDQz | 7,699 | Fix check for xla in PreTrainedModel.save_pretrained() | {
"login": "fteufel",
"id": 56223326,
"node_id": "MDQ6VXNlcjU2MjIzMzI2",
"avatar_url": "https://avatars.githubusercontent.com/u/56223326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fteufel",
"html_url": "https://github.com/fteufel",
"followers_url": "https://api.github.com/users/fteufel/followers",
"following_url": "https://api.github.com/users/fteufel/following{/other_user}",
"gists_url": "https://api.github.com/users/fteufel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fteufel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fteufel/subscriptions",
"organizations_url": "https://api.github.com/users/fteufel/orgs",
"repos_url": "https://api.github.com/users/fteufel/repos",
"events_url": "https://api.github.com/users/fteufel/events{/privacy}",
"received_events_url": "https://api.github.com/users/fteufel/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,602 | 1,602 | 1,602 | CONTRIBUTOR | null |
# What does this PR do?
Added is_torch_tpu_available() to the condition for saving a model as xla model when calling `PreTrainedModel.save_pretrained()`
The `xla_device` property of `config` can also be `True` on a non-xla device, when loading a checkpoint that was previously trained and saved on xla.
Loading a model that was trained on xla was fixed previously with #5636 , this PR fixes the problem of saving such a model again.
Fixes #7695
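The gist of the change, as a sketch (the surrounding variable names in `save_pretrained`, such as `model_to_save` and `output_model_file`, are assumed here):
```python
from transformers.file_utils import is_torch_tpu_available

if getattr(self.config, "xla_device", False) and is_torch_tpu_available():
    import torch_xla.core.xla_model as xm
    xm.save(model_to_save.state_dict(), output_model_file)
else:
    torch.save(model_to_save.state_dict(), output_model_file)
```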
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7699/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7699/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7699",
"html_url": "https://github.com/huggingface/transformers/pull/7699",
"diff_url": "https://github.com/huggingface/transformers/pull/7699.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7699.patch",
"merged_at": 1602490218000
} |
https://api.github.com/repos/huggingface/transformers/issues/7698 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7698/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7698/comments | https://api.github.com/repos/huggingface/transformers/issues/7698/events | https://github.com/huggingface/transformers/issues/7698 | 718,685,452 | MDU6SXNzdWU3MTg2ODU0NTI= | 7,698 | MLflow Trainer Callback | {
"login": "noise-field",
"id": 14188757,
"node_id": "MDQ6VXNlcjE0MTg4NzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/14188757?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/noise-field",
"html_url": "https://github.com/noise-field",
"followers_url": "https://api.github.com/users/noise-field/followers",
"following_url": "https://api.github.com/users/noise-field/following{/other_user}",
"gists_url": "https://api.github.com/users/noise-field/gists{/gist_id}",
"starred_url": "https://api.github.com/users/noise-field/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/noise-field/subscriptions",
"organizations_url": "https://api.github.com/users/noise-field/orgs",
"repos_url": "https://api.github.com/users/noise-field/repos",
"events_url": "https://api.github.com/users/noise-field/events{/privacy}",
"received_events_url": "https://api.github.com/users/noise-field/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Happy to get a PR on this!",
"@noise-field is remote tracking with remote server uri with authentication also enabled as part of this feature request ?",
"@RahulKulhari well, this is not part of the feature request, but you can certainly do remote tracking with mlflow callback. However, you will need to use environment variables (MLFLOW_TRACKING_URI , MLFLOW_TRACKING_USERNAME, MLFLOW_TRACKING_PASSWORD) in advance to configure your connection to the remote server."
] | 1,602 | 1,614 | 1,603 | CONTRIBUTOR | null | # 🚀 Feature request
A callback to log hyperparameters, metrics and configs/weights to MLflow, like the existing wandb and Tensorboard callbacks.
## Motivation
I use MLflow as my primary experiment tracking tool. It is convenient to run on a remote server and log the results from any of your training machines, and it also facilitates collaboration.
Trainer is an amazing tool; it makes it very simple to train models. However, the only way to modify the training loop to include custom logging seems to be to add a callback.
## Your contribution
I can contribute a PR.
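Roughly what I have in mind, modeled on the existing integrations (a sketch; the class name and wiring are hypothetical, not final API):
```python
import mlflow
from transformers import TrainerCallback

class MLflowCallback(TrainerCallback):
    def on_train_begin(self, args, state, control, **kwargs):
        mlflow.start_run()
        mlflow.log_params({k: str(v) for k, v in vars(args).items()})  # hyperparameters

    def on_log(self, args, state, control, logs=None, **kwargs):
        if logs:  # forward only numeric metrics
            mlflow.log_metrics(
                {k: v for k, v in logs.items() if isinstance(v, (int, float))},
                step=state.global_step,
            )

    def on_train_end(self, args, state, control, **kwargs):
        mlflow.end_run()
```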
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7698/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7698/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7697 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7697/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7697/comments | https://api.github.com/repos/huggingface/transformers/issues/7697/events | https://github.com/huggingface/transformers/issues/7697 | 718,654,097 | MDU6SXNzdWU3MTg2NTQwOTc= | 7,697 | tokenizers dependency warning: `transformers 3.3.1 has requirement tokenizers==0.8.1.rc2, but you'll have tokenizers 0.9.0` | {
"login": "morganmcg1",
"id": 20516801,
"node_id": "MDQ6VXNlcjIwNTE2ODAx",
"avatar_url": "https://avatars.githubusercontent.com/u/20516801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/morganmcg1",
"html_url": "https://github.com/morganmcg1",
"followers_url": "https://api.github.com/users/morganmcg1/followers",
"following_url": "https://api.github.com/users/morganmcg1/following{/other_user}",
"gists_url": "https://api.github.com/users/morganmcg1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/morganmcg1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/morganmcg1/subscriptions",
"organizations_url": "https://api.github.com/users/morganmcg1/orgs",
"repos_url": "https://api.github.com/users/morganmcg1/repos",
"events_url": "https://api.github.com/users/morganmcg1/events{/privacy}",
"received_events_url": "https://api.github.com/users/morganmcg1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Also stumbled upon this package. I was surprised we were forced to add the RC version of tokenizers. I would expect this to be pinned to \"0.8.1\" or \"0.9.1\"",
"https://github.com/huggingface/transformers/pull/7794 to update to release version 0.9.1",
"Hi, we have a strict requirement on `tokenizers==0.8.1rc2`. We're updating it in https://github.com/huggingface/transformers/pull/7659 but the current `transformers` `master` branch will stay pinned until that PR is merged.\r\n\r\nBoth libraries evolve quickly and generally evolve together, so having a strict `==` dependency is necessary until tokenizers version 1.0.0 is released."
] | 1,602 | 1,602 | 1,602 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 3.3.1
- Platform: Darwin-19.6.0-x86_64-i386-64bit
- Python version: 3.7.5
- PyTorch version (GPU?): 1.4.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
tokenizers: @mfuntowicz
## Information
Hey all, it'll probably be fixed soon, but when updating to `tokenizers` 0.9.0 with `pip install tokenizers --upgrade` I get:
`ERROR: transformers 3.3.1 has requirement tokenizers==0.8.1.rc2, but you'll have tokenizers 0.9.0 which is incompatible.`
## To reproduce
Steps to reproduce the behavior:
1. pip install transformers --upgrade
2. pip install tokenizers --upgrade
## Expected behavior
Expected compatibility between transformers 3.3.1 and tokenizers 0.9.0
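In the meantime, pinning the matching pair explicitly avoids the conflict, e.g. `pip install "transformers==3.3.1" "tokenizers==0.8.1rc2"` (per the maintainers' note in the comments that the two libraries evolve together under a strict `==` pin).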
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7697/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7697/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7696 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7696/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7696/comments | https://api.github.com/repos/huggingface/transformers/issues/7696/events | https://github.com/huggingface/transformers/pull/7696 | 718,610,488 | MDExOlB1bGxSZXF1ZXN0NTAwOTY5NDMz | 7,696 | Minor spelling corrections in docstrings. "information" is uncountable in English and has no plural. | {
"login": "AndreaSottana",
"id": 48888970,
"node_id": "MDQ6VXNlcjQ4ODg4OTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/48888970?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AndreaSottana",
"html_url": "https://github.com/AndreaSottana",
"followers_url": "https://api.github.com/users/AndreaSottana/followers",
"following_url": "https://api.github.com/users/AndreaSottana/following{/other_user}",
"gists_url": "https://api.github.com/users/AndreaSottana/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AndreaSottana/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AndreaSottana/subscriptions",
"organizations_url": "https://api.github.com/users/AndreaSottana/orgs",
"repos_url": "https://api.github.com/users/AndreaSottana/repos",
"events_url": "https://api.github.com/users/AndreaSottana/events{/privacy}",
"received_events_url": "https://api.github.com/users/AndreaSottana/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,602 | 1,602 | 1,602 | CONTRIBUTOR | null | Minor spelling corrections in docstrings. "information" is uncountable in English and has no plural. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7696/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7696/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7696",
"html_url": "https://github.com/huggingface/transformers/pull/7696",
"diff_url": "https://github.com/huggingface/transformers/pull/7696.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7696.patch",
"merged_at": 1602497361000
} |
https://api.github.com/repos/huggingface/transformers/issues/7695 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7695/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7695/comments | https://api.github.com/repos/huggingface/transformers/issues/7695/events | https://github.com/huggingface/transformers/issues/7695 | 718,610,102 | MDU6SXNzdWU3MTg2MTAxMDI= | 7,695 | save_pretrained() does not check if xla is available | {
"login": "fteufel",
"id": 56223326,
"node_id": "MDQ6VXNlcjU2MjIzMzI2",
"avatar_url": "https://avatars.githubusercontent.com/u/56223326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fteufel",
"html_url": "https://github.com/fteufel",
"followers_url": "https://api.github.com/users/fteufel/followers",
"following_url": "https://api.github.com/users/fteufel/following{/other_user}",
"gists_url": "https://api.github.com/users/fteufel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fteufel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fteufel/subscriptions",
"organizations_url": "https://api.github.com/users/fteufel/orgs",
"repos_url": "https://api.github.com/users/fteufel/repos",
"events_url": "https://api.github.com/users/fteufel/events{/privacy}",
"received_events_url": "https://api.github.com/users/fteufel/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,602 | 1,602 | 1,602 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 3.3.1
- Platform: Linux
- Python version: 3.6
- PyTorch version (GPU?): 1.4
- Tensorflow version (GPU?):
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
## Information
Model I am using (Bert, XLNet ...): Bert
The problem arises when using:
* [x] my own modified scripts: (give details below)
The task I am working on is:
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. load any model trained on TPU with `BertModel.from_pretrained(tpu_checkpoint_path)`
2. run/train the model - works fine
3. save the model with `model.save_pretrained(save_path)`
```
line 720, in save_pretrained
import torch_xla.core.xla_model as xm
ModuleNotFoundError: No module named 'torch_xla'
```
## Expected behavior
I am pretraining a LM on TPU, and for the downstream task fine-tuning I load the saved checkpoints on a non-TPU device.
Loading works fine now (#5636), but saving again does not.
`save_pretrained` should check whether the device is still xla - the original config attribute that is used in `save_pretrained` to check for the device persists when loading the xla model on another device:
```
getattr(config, 'xla_device')
True
```
It is easy to fix by changing the config attribute `setattr(config, 'xla_device', False)` in the script, but I would still consider it a bug.
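For anyone hitting this before a fix lands, the full workaround looks like this (a sketch using the paths from the reproduction steps above):
```python
from transformers import BertModel

model = BertModel.from_pretrained(tpu_checkpoint_path)  # checkpoint pretrained on TPU
model.config.xla_device = False  # clear the flag inherited from the TPU run
model.save_pretrained(save_path)  # now takes the plain torch.save code path
```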
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7695/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7695/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7694 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7694/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7694/comments | https://api.github.com/repos/huggingface/transformers/issues/7694/events | https://github.com/huggingface/transformers/pull/7694 | 718,609,678 | MDExOlB1bGxSZXF1ZXN0NTAwOTY4Nzkx | 7,694 | Fix docstring in AutoModel class | {
"login": "av-maslov",
"id": 71869629,
"node_id": "MDQ6VXNlcjcxODY5NjI5",
"avatar_url": "https://avatars.githubusercontent.com/u/71869629?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/av-maslov",
"html_url": "https://github.com/av-maslov",
"followers_url": "https://api.github.com/users/av-maslov/followers",
"following_url": "https://api.github.com/users/av-maslov/following{/other_user}",
"gists_url": "https://api.github.com/users/av-maslov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/av-maslov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/av-maslov/subscriptions",
"organizations_url": "https://api.github.com/users/av-maslov/orgs",
"repos_url": "https://api.github.com/users/av-maslov/repos",
"events_url": "https://api.github.com/users/av-maslov/events{/privacy}",
"received_events_url": "https://api.github.com/users/av-maslov/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks!"
] | 1,602 | 1,602 | 1,602 | CONTRIBUTOR | null | # What does this PR do?
Fixes the docstring for the `AutoModel` class.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
--> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7694/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7694/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7694",
"html_url": "https://github.com/huggingface/transformers/pull/7694",
"diff_url": "https://github.com/huggingface/transformers/pull/7694.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7694.patch",
"merged_at": 1602378489000
} |
https://api.github.com/repos/huggingface/transformers/issues/7693 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7693/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7693/comments | https://api.github.com/repos/huggingface/transformers/issues/7693/events | https://github.com/huggingface/transformers/issues/7693 | 718,582,144 | MDU6SXNzdWU3MTg1ODIxNDQ= | 7,693 | How to get the word embedding after pre-training ? for example, a embedding matrix | {
"login": "AI678",
"id": 63541083,
"node_id": "MDQ6VXNlcjYzNTQxMDgz",
"avatar_url": "https://avatars.githubusercontent.com/u/63541083?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AI678",
"html_url": "https://github.com/AI678",
"followers_url": "https://api.github.com/users/AI678/followers",
"following_url": "https://api.github.com/users/AI678/following{/other_user}",
"gists_url": "https://api.github.com/users/AI678/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AI678/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AI678/subscriptions",
"organizations_url": "https://api.github.com/users/AI678/orgs",
"repos_url": "https://api.github.com/users/AI678/repos",
"events_url": "https://api.github.com/users/AI678/events{/privacy}",
"received_events_url": "https://api.github.com/users/AI678/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"It depends of what you understand as \"embedding\", as it can be ambiguous with transformer models. \r\n\r\nEmbeddings can be the embedding matrix, which returns context-less embedding of tokens, that you can obtain with `model.get_input_embeddings()`.\r\n\r\nEmbeddings can also be understood as the features generated by the base model, which are token embeddings with context (depends on the tokens surrounding the token you're studying). You can simply do a forward pass through the base models (e.g., `BertModel`, `GPT2Model`, etc.) to get these embeddings.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,602 | 1,608 | 1,608 | NONE | null | I am excited about this great model, and I want to get the word embeddings. Where should I find the file in the output, or should I change the code to do this? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7693/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7693/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7692 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7692/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7692/comments | https://api.github.com/repos/huggingface/transformers/issues/7692/events | https://github.com/huggingface/transformers/issues/7692 | 718,558,665 | MDU6SXNzdWU3MTg1NTg2NjU= | 7,692 | Fail to run text classification example with run_tf_text_classification | {
"login": "lkluo",
"id": 26020832,
"node_id": "MDQ6VXNlcjI2MDIwODMy",
"avatar_url": "https://avatars.githubusercontent.com/u/26020832?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lkluo",
"html_url": "https://github.com/lkluo",
"followers_url": "https://api.github.com/users/lkluo/followers",
"following_url": "https://api.github.com/users/lkluo/following{/other_user}",
"gists_url": "https://api.github.com/users/lkluo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lkluo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lkluo/subscriptions",
"organizations_url": "https://api.github.com/users/lkluo/orgs",
"repos_url": "https://api.github.com/users/lkluo/repos",
"events_url": "https://api.github.com/users/lkluo/events{/privacy}",
"received_events_url": "https://api.github.com/users/lkluo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Might be of interest to @jplu ",
"Hello!\r\n\r\nCan you give more detail on how to reproduce your issue, otherwise we cannot help you.",
"> Hello!\r\n> \r\n> Can you give more detail on how to reproduce your issue, otherwise we cannot help you.\r\n\r\nThanks for your replay. I followed the instruction [here](https://github.com/huggingface/transformers/blob/master/examples/text-classification/README.md) with my own datasets (I can not provide my dataset due to confidentiality.). My script is below:\r\n\r\n> python3 run_tf_text_classification.py \\\r\n--train_file $data_dir/train.csv \\\r\n--dev_file $data_dir/dev.csv \\\r\n--test_file $data_dir/test.csv \\\r\n--label_column_id 0 \\\r\n--model_name_or_path distilbert-base-uncased \\\r\n--cache_dir $cache_dir \\\r\n--output_dir $output_dir \\\r\n--num_train_epochs 4 \\\r\n--per_device_train_batch_size 12 \\\r\n--per_device_eval_batch_size 12 \\\r\n--do_train \\\r\n--do_eval \\\r\n--do_predict \\\r\n--logging_steps 10 \\\r\n--evaluate_during_training \\\r\n--save_steps 10 \\\r\n--overwrite_output_dir \\\r\n--max_seq_length 128\r\n\r\nI can see the training started and run for a while, as there were checkpoints saved in the `output_dir`.",
"Sorry, I tried with one of my dataset with the exact same command line on 4/2/1 GPUs and CPU and cannot reproduce your error. The only one thing I can tell you to do is to be sure to use the master version of the script. Otherwise without more information I cannot really help you more sorry :(",
"> Sorry, I tried with one of my dataset with the exact same command line on 4/2/1 GPUs and CPU and cannot reproduce your error. The only one thing I can tell you to do is to be sure to use the master version of the script. Otherwise without more information I cannot really help you more sorry :(\r\n\r\nThanks for your help. It could be related to tensorflow or transformers versions. I will try a few of them and see how it would solve my problem.",
"@lkluo I have the exact same problem. Did you solve this?",
"> @lkluo I have the exact same problem. Did you solve this?\r\n\r\nI have tried many ways without any luck, so I gave up.\r\nYou may open a new issue and seek help from @jplu.",
"@lkluo Thanks for your response. I think I got it fixed. I was saving the model after each epoch with `tf.saved_model.save(self.model, self.args.output_dir)`. However, when using the model for evaluation after saving it once with this method, I got the error you described. I changed it to using `self.model.ckpt_manager.save()` which is a bit inconvenient since I want .pb files, but at least the code runs fine now. If your error is also related to storing the model, this might help you.\r\n",
"> @lkluo Thanks for your response. I think I got it fixed. I was saving the model after each epoch with `tf.saved_model.save(self.model, self.args.output_dir)`. However, when using the model for evaluation after saving it once with this method, I got the error you described. I changed it to using `self.model.ckpt_manager.save()` which is a bit inconvenient since I want .pb files, but at least the code runs fine now. If your error is also related to storing the model, this might help you.\r\n\r\nGood to know, thanks for letting me know. I will definitely give a try with your method.",
"## Environment info\r\nPlatform: Jupyter notebook on Ubuntu 2004\r\nTF version: 2.3.1\r\nTransformers version: 3.5.0\r\nPython version: 3.6.9\r\nSingle GPU: RTX2080TI\r\n\r\n## Issue\r\nI am encountering the same error during evaluation using TFTrainer.train(). It is not reproducible, and it seems to happen randomly. I installed the latest docker image from the Tensorflow website (docker pull tensorflow/tensorflow:latest-gpu-jupyter). It seems everyone is downgrading Tensorflow to avoid this issue? What is the lowest possible version of Tensorflow for using Transformers 3.5.0?",
"I met the same problem. It caused by TFTrainer.train() when excute to line 573 'self.distributed_training_steps(batch)' in trainer_tf.py. And it throws \r\n```\r\n2020-11-21 19:34:31.165454: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:172] Filling up shuffle buffer (this may take a while): 9713 of 16392\r\n2020-11-21 19:34:38.101580: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:221] Shuffle buffer filled.\r\n```\r\nI tried with colab gpu, it is not work. And I searched the same issue \"Shuffle buffer filled.\" in tensorflow, it is still not solved.",
"I reopen this issue because many others encountered the same problem.",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,602 | 1,614 | 1,614 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: Ubuntu 18.04
- Python version: 3.6.9
- PyTorch version (GPU?):
- Tensorflow version (GPU?): 2.3.1
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Distributed
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
blenderbot: @mariamabarham
Bart: @sshleifer
Marian: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
Text classification with own dataset
## To reproduce
Steps to reproduce the behavior:
1.
2.
3.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
I use my own datasets ({train,dev,test}.csv) and run `run_tf_text_classification.py`. The training seems OK, but an error occurs during evaluation, as shown below:
```
2020-10-10 07:33:15.368292: W tensorflow/core/framework/op_kernel.cc:1767] OP_REQUIRES failed at resource_variable_ops.cc:537 : Not found: Resource localhost/_AnonymousVar110/N10tensorflow3VarE does not exist.
Traceback (most recent call last):
  File "run_tf_text_classification.py", line 292, in <module>
    main()
  File "run_tf_text_classification.py", line 267, in main
    trainer.train()
  File "/usr/local/lib/python3.6/dist-packages/transformers/trainer_tf.py", line 592, in train
    self.evaluate()
  File "/usr/local/lib/python3.6/dist-packages/transformers/trainer_tf.py", line 438, in evaluate
    output = self.prediction_loop(eval_ds, steps, num_examples, description="Evaluation")
  File "/usr/local/lib/python3.6/dist-packages/transformers/trainer_tf.py", line 327, in prediction_loop
    logits = self.distributed_prediction_steps(batch)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py", line 780, in __call__
    result = self._call(*args, **kwds)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py", line 814, in _call
    results = self._stateful_fn(*args, **kwds)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py", line 2829, in __call__
    return graph_function._filtered_call(args, kwargs)  # pylint: disable=protected-access
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py", line 1848, in _filtered_call
    cancellation_manager=cancellation_manager)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py", line 1924, in _call_flat
    ctx, args, cancellation_manager=cancellation_manager))
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py", line 550, in call
    ctx=ctx)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/execute.py", line 60, in quick_execute
    inputs, attrs, num_outputs)
tensorflow.python.framework.errors_impl.NotFoundError: Resource localhost/_AnonymousVar110/N10tensorflow3VarE does not exist.
  [[node AssignAddVariableOp (defined at /usr/local/lib/python3.6/dist-packages/transformers/trainer_tf.py:457) ]] [Op:__inference_distributed_prediction_steps_11885]
Function call stack:
distributed_prediction_steps
```
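For reference, a minimal sketch of the saving pattern that reportedly avoids this (per the comments above; `model` is assumed to be a trainer-managed model exposing a `ckpt_manager`):
```python
# Saving through the checkpoint manager reportedly keeps later evaluation working,
# whereas tf.saved_model.save(model, output_dir) triggered the NotFoundError above.
model.ckpt_manager.save()  # yields TF checkpoints rather than .pb files
```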
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7692/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7692/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7691 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7691/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7691/comments | https://api.github.com/repos/huggingface/transformers/issues/7691/events | https://github.com/huggingface/transformers/issues/7691 | 718,511,548 | MDU6SXNzdWU3MTg1MTE1NDg= | 7,691 | Seq2Seq Example with Bart not Saving Best Model | {
"login": "ncoop57",
"id": 7613470,
"node_id": "MDQ6VXNlcjc2MTM0NzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/7613470?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ncoop57",
"html_url": "https://github.com/ncoop57",
"followers_url": "https://api.github.com/users/ncoop57/followers",
"following_url": "https://api.github.com/users/ncoop57/following{/other_user}",
"gists_url": "https://api.github.com/users/ncoop57/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ncoop57/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ncoop57/subscriptions",
"organizations_url": "https://api.github.com/users/ncoop57/orgs",
"repos_url": "https://api.github.com/users/ncoop57/repos",
"events_url": "https://api.github.com/users/ncoop57/events{/privacy}",
"received_events_url": "https://api.github.com/users/ncoop57/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"Using tiny was very smart.\r\nWe upgraded to pytorch_lightning 0.9.0 (`pip install -r examples/requirements.txt`), does that fix your issue?",
"It worked! But.... I had to update pyarrow from 0.14.1 to 0.17.1 because I was getting the following error:\r\n`AttributeError: module 'pyarrow' has no attribute 'PyExtensionType'`\r\n\r\nWhich I am guessing is due to y'all epic datasets library requiring pyarrow=>0.17.1:\r\n`ERROR: datasets 1.1.2 has requirement pyarrow>=0.17.1, but you'll have pyarrow 0.14.1 which is incompatible.`\r\n\r\nI opened a PR to add this dependency on pyarrow 0.17.1 to the `examples/requirements.txt`: https://github.com/huggingface/transformers/pull/7750#issue-501958657\r\nIf the PR can be accepted, I'd say this issue can be fully closed.\r\n\r\nThanks for your help with this!"
] | 1,602 | 1,602 | 1,602 | CONTRIBUTOR | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.3.1
- Platform: Ubuntu
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0+cu101
- Tensorflow version (GPU?): N/A
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@sshleifer
## Information
Model I am using (Bert, XLNet ...): Bart
The problem arises when using:
* [x] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
I am using a slightly modified version of the examples/seq2seq/finetune_bart_tiny.sh script, where I just add the `--val_check_interval 0.1 --do_predict` flags to the finetune.py call:
```
python finetune.py \
--data_dir=cnn_tiny/ \
--model_name_or_path=sshleifer/bart-tiny-random \
--learning_rate=3e-5 \
--train_batch_size=2 \
--eval_batch_size=2 \
--output_dir=$OUTPUT_DIR \
--num_train_epochs=1 \
--gpus=0 \
--val_check_interval 0.1 \
--do_train --do_predict "$@"
```
This is supposed to save the best-performing model based on `val_check_interval` and then evaluate that model, as is done in the regular `finetune.sh` script (though the error occurs in that one as well; I am using the tiny version so that the issue is easier to see).
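For reference, a minimal sketch of the checkpoint wiring that `trainer.test()` depends on (hypothetical snippet; argument names follow pytorch-lightning 0.9.x, where `ckpt_path="best"` requires a configured `ModelCheckpoint`):
```python
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ModelCheckpoint

# Keep the single best checkpoint by validation loss so that
# trainer.test() can resolve ckpt_path="best" afterwards.
checkpoint_callback = ModelCheckpoint(
    filepath="output/",  # placeholder output directory
    monitor="val_loss",  # assumes the module logs this metric
    mode="min",
    save_top_k=1,
)
trainer = Trainer(checkpoint_callback=checkpoint_callback)
```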
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: tiny-cnn
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
Go through this Google Colab: https://colab.research.google.com/drive/1xtyvXI6gNAJpSkqYi_0ieWkMFRw3OSm2?usp=sharing
```
._cnn_tiny
cnn_tiny/
cnn_tiny/._train.target
cnn_tiny/train.target
cnn_tiny/._train.source
cnn_tiny/train.source
cnn_tiny/._val.source
cnn_tiny/val.source
cnn_tiny/._val.target
cnn_tiny/val.target
cnn_tiny/._test.source
cnn_tiny/test.source
cnn_tiny/._test.target
cnn_tiny/test.target
Epoch 0: 17%|█▋ | 1/6 [00:00<00:02, 2.20it/s, loss=10.839, v_num=1]
Validating: 0it [00:00, ?it/s]
Epoch 0: 33%|███▎ | 2/6 [00:00<00:01, 2.02it/s, loss=10.839, v_num=1]
Epoch 0: 50%|█████ | 3/6 [00:01<00:01, 2.07it/s, loss=10.839, v_num=1]
Epoch 0: 67%|██████▋ | 4/6 [00:01<00:00, 2.33it/s, loss=10.837, v_num=1]
Validating: 0it [00:00, ?it/s]
Epoch 0: 83%|████████▎ | 5/6 [00:02<00:00, 2.24it/s, loss=10.837, v_num=1]
Epoch 0: 100%|██████████| 6/6 [00:02<00:00, 2.28it/s, loss=10.837, v_num=1]
Epoch 0: 100%|██████████| 6/6 [00:02<00:00, 2.28it/s, loss=10.837, v_num=1]
--2020-10-10 02:28:52-- https://cdn-datasets.huggingface.co/summarization/cnn_tiny.tgz
Resolving cdn-datasets.huggingface.co (cdn-datasets.huggingface.co)... 13.227.209.120, 13.227.209.109, 13.227.209.124, ...
Connecting to cdn-datasets.huggingface.co (cdn-datasets.huggingface.co)|13.227.209.120|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 23131 (23K) [application/x-tar]
Saving to: ‘cnn_tiny.tgz’
0K .......... .......... .. 100% 44.4M=0s
2020-10-10 02:28:52 (44.4 MB/s) - ‘cnn_tiny.tgz’ saved [23131/23131]
2020-10-10 02:28:54.290821: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
/usr/local/lib/python3.6/dist-packages/pytorch_lightning/utilities/distributed.py:45: UserWarning: The validation_epoch_end should not return anything as of 9.1.to log, use self.log(...) or self.write(...) directly in the LightningModule
warnings.warn(*args, **kwargs)
/usr/local/lib/python3.6/dist-packages/pytorch_lightning/utilities/distributed.py:45: UserWarning: The {log:dict keyword} was deprecated in 0.9.1 and will be removed in 1.0.0
Please use self.log(...) inside the lightningModule instead.
# log on a step or aggregate epoch metric to the logger and/or progress bar
# (inside LightningModule)
self.log('train_loss', loss, on_step=True, on_epoch=True, prog_bar=True)
warnings.warn(*args, **kwargs)
Traceback (most recent call last):
File "finetune.py", line 440, in <module>
main(args)
File "finetune.py", line 429, in main
trainer.test()
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 728, in test
results = self.__test_using_best_weights(ckpt_path, test_dataloaders)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 740, in __test_using_best_weights
'ckpt_path is "best", but ModelCheckpoint is not configured to save the best model.'
pytorch_lightning.utilities.exceptions.MisconfigurationException: ckpt_path is "best", but ModelCheckpoint is not configured to save the best model
```
## Expected behavior
The script should save the model with the best validation loss and should then use this saved model for evaluation against a test set, as the regular `finetune.sh` script does. This was working as of Oct 4/5th, but stopped sometime after.
Any help with this issue would be greatly appreciated! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7691/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7691/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7690 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7690/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7690/comments | https://api.github.com/repos/huggingface/transformers/issues/7690/events | https://github.com/huggingface/transformers/issues/7690 | 718,473,483 | MDU6SXNzdWU3MTg0NzM0ODM= | 7,690 | RAG Tokenizer erroring out | {
"login": "dzorlu",
"id": 3424293,
"node_id": "MDQ6VXNlcjM0MjQyOTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3424293?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dzorlu",
"html_url": "https://github.com/dzorlu",
"followers_url": "https://api.github.com/users/dzorlu/followers",
"following_url": "https://api.github.com/users/dzorlu/following{/other_user}",
"gists_url": "https://api.github.com/users/dzorlu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dzorlu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dzorlu/subscriptions",
"organizations_url": "https://api.github.com/users/dzorlu/orgs",
"repos_url": "https://api.github.com/users/dzorlu/repos",
"events_url": "https://api.github.com/users/dzorlu/events{/privacy}",
"received_events_url": "https://api.github.com/users/dzorlu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Just to follow up on this, look like special tokens are loaded for the RAG generator [here](https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_utils_base.py#L1635), but it is not converted to `AddedTokens` [here](https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_utils_base.py#L1592) and hence not compatible with downstream operations. ",
"When I run the examples from: \r\nhttps://huggingface.co/transformers/model_doc/rag.html \r\n\r\nI get exactly the same error:\r\n\r\n",
"Hey @dzorlu - thanks for your error, I will take a look tomorrow!",
"> Hey @dzorlu - thanks for your error, I will take a look tomorrow!\r\n\r\nthanks @patrickvonplaten . Appreciate all the hard work :+1: ",
"Should be solved now - let me know if you still experience problems @dzorlu ",
"Thank you!"
] | 1,602 | 1,602 | 1,602 | NONE | null | ## Environment info
- `transformers` version: 3.3.1
- Platform: Linux-5.4.0-48-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.9
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
@ola13 @mfuntowicz
## Information
Hi, I am trying to get RAG running; however, I am getting an error when I follow the instructions here: <https://huggingface.co/facebook/rag-token-nq>
Particularly, the error message is as follows:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-7-35cd6a2213c0> in <module>
1 from transformers import AutoTokenizer, AutoModelWithLMHead
2
----> 3 tokenizer = AutoTokenizer.from_pretrained("facebook/rag-token-nq")
~/src/transformers/src/transformers/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)
258 return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
259 else:
--> 260 return tokenizer_class_py.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
261
262 raise ValueError(
~/src/transformers/src/transformers/tokenization_rag.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
61 print(config.generator)
62 print("***")
---> 63 generator = AutoTokenizer.from_pretrained(generator_path, config=config.generator)
64 return cls(question_encoder=question_encoder, generator=generator)
65
~/src/transformers/src/transformers/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)
258 return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
259 else:
--> 260 return tokenizer_class_py.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
261
262 raise ValueError(
~/src/transformers/src/transformers/tokenization_utils_base.py in from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs)
1557
1558 return cls._from_pretrained(
-> 1559 resolved_vocab_files, pretrained_model_name_or_path, init_configuration, *init_inputs, **kwargs
1560 )
1561
~/src/transformers/src/transformers/tokenization_utils_base.py in _from_pretrained(cls, resolved_vocab_files, pretrained_model_name_or_path, init_configuration, *init_inputs, **kwargs)
1648
1649 # Add supplementary tokens.
-> 1650 special_tokens = tokenizer.all_special_tokens
1651 if added_tokens_file is not None:
1652 with open(added_tokens_file, encoding="utf-8") as added_tokens_handle:
~/src/transformers/src/transformers/tokenization_utils_base.py in all_special_tokens(self)
1026 Convert tokens of :obj:`tokenizers.AddedToken` type to string.
1027 """
-> 1028 all_toks = [str(s) for s in self.all_special_tokens_extended]
1029 return all_toks
1030
~/src/transformers/src/transformers/tokenization_utils_base.py in all_special_tokens_extended(self)
1046 logger.info(all_toks)
1047 print(all_toks)
-> 1048 all_toks = list(OrderedDict.fromkeys(all_toks))
1049 return all_toks
1050
TypeError: unhashable type: 'dict'
```
The `all_toks` variable looks as follows. It is a list of dictionaries, and `OrderedDict.fromkeys` cannot hash its elements.
```
[{'content': '<s>', 'single_word': False, 'lstrip': False, 'rstrip': False, 'normalized': True}, {'content': '</s>', 'single_word': False, 'lstrip': False, 'rstrip': False, 'normalized': True}, {'content': '<unk>', 'single_word': False, 'lstrip': False, 'rstrip': False, 'normalized': True}, {'content': '</s>', 'single_word': False, 'lstrip': False, 'rstrip': False, 'normalized': True}, {'content': '<pad>', 'single_word': False, 'lstrip': False, 'rstrip': False, 'normalized': True}, {'content': '<s>', 'single_word': False, 'lstrip': False, 'rstrip': False, 'normalized': True}, {'content': '<mask>', 'single_word': False, 'lstrip': True, 'rstrip': False, 'normalized': True}]
```
I will be digging deeper, hoping that I am making an obvious mistake.
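In the meantime, a rough workaround sketch (assuming the dict entries map one-to-one onto `tokenizers.AddedToken` fields, as their keys suggest):
```python
from collections import OrderedDict
from tokenizers import AddedToken

# Hypothetical fix: rebuild AddedToken objects from the plain dicts before
# deduplicating, then compare by string content instead of hashing dicts.
tokens = [t if isinstance(t, str) else AddedToken(**t) for t in all_toks]
unique_toks = list(OrderedDict.fromkeys(str(t) for t in tokens))
```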
## To reproduce
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("facebook/rag-token-nq")
```
## Expected behavior
It should load the tokenizer!
Thank you.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7690/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7690/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7689 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7689/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7689/comments | https://api.github.com/repos/huggingface/transformers/issues/7689/events | https://github.com/huggingface/transformers/pull/7689 | 718,472,843 | MDExOlB1bGxSZXF1ZXN0NTAwODYyOTMx | 7,689 | Fix flaky test in test_trainer | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,602 | 1,602 | 1,602 | COLLABORATOR | null | # What does this PR do?
As investigated with @LysandreJik today, the corresponding test was flaky because `logging_dir` has a default that depends on the current time (so the test fails if the two Trainers are instantiated just before and just after a new minute).
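A minimal sketch of the idea behind the fix (hypothetical test snippet; argument names follow `TrainingArguments`):
```python
from transformers import TrainingArguments

# Pinning logging_dir explicitly means two instances created across a
# minute boundary still compare equal in the test.
args_a = TrainingArguments(output_dir="out", logging_dir="logs")
args_b = TrainingArguments(output_dir="out", logging_dir="logs")
assert args_a.logging_dir == args_b.logging_dir
```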
"url": "https://api.github.com/repos/huggingface/transformers/issues/7689/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7689/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7689",
"html_url": "https://github.com/huggingface/transformers/pull/7689",
"diff_url": "https://github.com/huggingface/transformers/pull/7689.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7689.patch",
"merged_at": 1602288076000
} |
https://api.github.com/repos/huggingface/transformers/issues/7688 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7688/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7688/comments | https://api.github.com/repos/huggingface/transformers/issues/7688/events | https://github.com/huggingface/transformers/pull/7688 | 718,471,403 | MDExOlB1bGxSZXF1ZXN0NTAwODYxNzI2 | 7,688 | Adds license information for default and distilbert models | {
"login": "ankane",
"id": 220358,
"node_id": "MDQ6VXNlcjIyMDM1OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/220358?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ankane",
"html_url": "https://github.com/ankane",
"followers_url": "https://api.github.com/users/ankane/followers",
"following_url": "https://api.github.com/users/ankane/following{/other_user}",
"gists_url": "https://api.github.com/users/ankane/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ankane/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ankane/subscriptions",
"organizations_url": "https://api.github.com/users/ankane/orgs",
"repos_url": "https://api.github.com/users/ankane/repos",
"events_url": "https://api.github.com/users/ankane/events{/privacy}",
"received_events_url": "https://api.github.com/users/ankane/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"Thanks!",
"Thanks @julien-c!"
] | 1,602 | 1,602 | 1,602 | CONTRIBUTOR | null | # What does this PR do?
Adds license information for default and `distilbert*` models. A follow up to #7668.
- Apache 2.0 for `distilbert*` based on https://github.com/huggingface/transformers/issues/3357#issuecomment-614856396
- MIT for `facebook/bart-large-mnli` based on https://github.com/huggingface/transformers/issues/7668#issuecomment-706064737
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. #7668
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
Model Cards: @julien-c | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7688/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7688/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7688",
"html_url": "https://github.com/huggingface/transformers/pull/7688",
"diff_url": "https://github.com/huggingface/transformers/pull/7688.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7688.patch",
"merged_at": 1602316512000
} |
https://api.github.com/repos/huggingface/transformers/issues/7687 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7687/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7687/comments | https://api.github.com/repos/huggingface/transformers/issues/7687/events | https://github.com/huggingface/transformers/pull/7687 | 718,469,809 | MDExOlB1bGxSZXF1ZXN0NTAwODYwMzc5 | 7,687 | Fix title level in Blenderbot doc | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,602 | 1,602 | 1,602 | COLLABORATOR | null | # What does this PR do?
The navigation bar is a bit messy because the BlenderBot documentation puts the title and the sections at the same level. This PR fixes that.
@LysandreJik merging as soon as it's green because it's a small fix, tagging you so you're aware. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7687/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7687/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7687",
"html_url": "https://github.com/huggingface/transformers/pull/7687",
"diff_url": "https://github.com/huggingface/transformers/pull/7687.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7687.patch",
"merged_at": 1602285851000
} |
https://api.github.com/repos/huggingface/transformers/issues/7686 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7686/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7686/comments | https://api.github.com/repos/huggingface/transformers/issues/7686/events | https://github.com/huggingface/transformers/issues/7686 | 718,467,632 | MDU6SXNzdWU3MTg0Njc2MzI= | 7,686 | When downloading RAG dpr indexes, there is a pickle file loading error | {
"login": "zhangdongxu",
"id": 7161563,
"node_id": "MDQ6VXNlcjcxNjE1NjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7161563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhangdongxu",
"html_url": "https://github.com/zhangdongxu",
"followers_url": "https://api.github.com/users/zhangdongxu/followers",
"following_url": "https://api.github.com/users/zhangdongxu/following{/other_user}",
"gists_url": "https://api.github.com/users/zhangdongxu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhangdongxu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhangdongxu/subscriptions",
"organizations_url": "https://api.github.com/users/zhangdongxu/orgs",
"repos_url": "https://api.github.com/users/zhangdongxu/repos",
"events_url": "https://api.github.com/users/zhangdongxu/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhangdongxu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It was probably because I repeatly downloaded the indexes and the previous incomplete files still exist. Issue closed",
"I met the same bug."
] | 1,602 | 1,615 | 1,602 | NONE | null | When I try to fine-tune RAG with the following code:
```
self.config = RagConfig.from_pretrained("facebook/rag-token-base", n_docs=2, use_dummy_dataset=False)
self.tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-base")
self.tokenizer.pad_token_id = AutoTokenizer.from_pretrained("facebook/bart-large").pad_token_id
retriever = RagRetriever.from_pretrained("facebook/rag-token-base", use_dummy_dataset=False)
self.model = RagTokenForGeneration.from_pretrained("facebook/rag-token-base", retriever=retriever, config=self.config)
```
An error occurs; it seems that a downloaded pickle file cannot be loaded. I put the error message below.
I assigned 200GB of memory, so it should not be a memory issue. I'm not sure whether this is a trivial error due to my own implementation bugs or something more common. Thank you very much!
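In case it helps others, a minimal cleanup sketch (assuming the root cause is a partially downloaded file in the datasets cache; the cache path appears in the traceback below):
```python
import shutil
from pathlib import Path

# Hypothetical cleanup: remove possibly truncated downloads so they are re-fetched.
shutil.rmtree(Path.home() / ".cache/huggingface/datasets/downloads", ignore_errors=True)
```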
```
  File "anaconda3/envs/transformers/lib/python3.7/site-packages/numpy/lib/npyio.py", line 447, in load
    return pickle.load(fid, **pickle_kwargs)
_pickle.UnpicklingError: pickle data was truncated

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "anaconda3/envs/transformers/lib/python3.7/site-packages/datasets/builder.py", line 553, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "anaconda3/envs/transformers/lib/python3.7/site-packages/datasets/builder.py", line 841, in _prepare_split
    generator, unit=" examples", total=split_info.num_examples, leave=False, disable=not_verbose
  File "anaconda3/envs/transformers/lib/python3.7/site-packages/tqdm/std.py", line 1130, in __iter__
    for obj in iterable:
  File ".cache/huggingface/modules/datasets_modules/datasets/wiki_dpr/14b973bf2a456087ff69c0fd34526684eed22e48e0dfce4338f9a22b965ce7c2/wiki_dpr.py", line 132, in _generate_examples
    vecs = np.load(open(vectors_files.pop(0), "rb"), allow_pickle=True)
  File "anaconda3/envs/transformers/lib/python3.7/site-packages/numpy/lib/npyio.py", line 450, in load
    "Failed to interpret file %s as a pickle" % repr(file))
OSError: Failed to interpret file <_io.BufferedReader name='.cache/huggingface/datasets/downloads/cd4183aaa482e0e3724cb8b2efafc6c762914aabed38c16a41f922ff7d5e90f9'> as a pickle

Traceback (most recent call last):
  File "src/finetune.py", line 432, in <module>
    main(args)
  File "src/finetune.py", line 371, in main
    model: SummarizationModule = SummarizationModule(args)
  File "src/finetune.py", line 73, in __init__
    super().__init__(hparams, num_labels=None, mode=self.mode, **kwargs)
  File "src/lightning_base.py", line 130, in __init__
    retriever = RagRetriever.from_pretrained("facebook/rag-token-base", use_dummy_dataset=False)
  File "anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/retrieval_rag.py", line 310, in from_pretrained
    config, question_encoder_tokenizer=question_encoder_tokenizer, generator_tokenizer=generator_tokenizer
  File "anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/retrieval_rag.py", line 301, in __init__
    self.init_retrieval()
  File "anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/retrieval_rag.py", line 327, in init_retrieval
    self.index.init_index()
  File "anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/retrieval_rag.py", line 241, in init_index
    dummy=self.use_dummy_dataset,
  File "anaconda3/envs/transformers/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset
    ignore_verifications=ignore_verifications,
  File "anaconda3/envs/transformers/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare
    dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
  File "anaconda3/envs/transformers/lib/python3.7/site-packages/datasets/builder.py", line 555, in _download_and_prepare
    raise OSError("Cannot find data file. " + (self.manual_download_instructions or ""))
OSError: Cannot find data file.
srun: error: node006: task 0: Exited with exit code 1
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7686/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7686/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7685 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7685/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7685/comments | https://api.github.com/repos/huggingface/transformers/issues/7685/events | https://github.com/huggingface/transformers/issues/7685 | 718,355,136 | MDU6SXNzdWU3MTgzNTUxMzY= | 7,685 | Using PaddingStrategy and TruncationStrategy throws an UnboundLocalError in tokenizers | {
"login": "dashayushman",
"id": 12120785,
"node_id": "MDQ6VXNlcjEyMTIwNzg1",
"avatar_url": "https://avatars.githubusercontent.com/u/12120785?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dashayushman",
"html_url": "https://github.com/dashayushman",
"followers_url": "https://api.github.com/users/dashayushman/followers",
"following_url": "https://api.github.com/users/dashayushman/following{/other_user}",
"gists_url": "https://api.github.com/users/dashayushman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dashayushman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dashayushman/subscriptions",
"organizations_url": "https://api.github.com/users/dashayushman/orgs",
"repos_url": "https://api.github.com/users/dashayushman/repos",
"events_url": "https://api.github.com/users/dashayushman/events{/privacy}",
"received_events_url": "https://api.github.com/users/dashayushman/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi! This should have been solved in `master`. Can you install from source and let us know if you're facing the same issue?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,602 | 1,608 | 1,608 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.3.1
- Platform: Darwin-19.6.0-x86_64-i386-64bit
- Python version: 3.7.7
- PyTorch version (GPU?): 1.6.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help
@mfuntowicz I am almost sure I am using it right. But after looking at the code I found that there are two variables that are being accessed before assignment. Thanks in advance.
## Information
This is exactly what I am doing:
I am trying to load a tokenizer using `AutoTokenizer` and encode a single string. I am using a pretrained tokenizer `distilbert-base-uncased`
## To reproduce
Steps to reproduce the behavior:
This is exactly what I am running:
```
from transformers import AutoTokenizer
from transformers.tokenization_utils_base import PaddingStrategy
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased", use_fast=True)
encoded_sentence = tokenizer.encode_plus(
"some input text",
return_attention_mask=True,
padding=PaddingStrategy.MAX_LENGTH,
add_special_tokens=True,
max_length=20,
return_token_type_ids=True
)
print(encoded_sentence)
```
This throws the following error:
```
Traceback (most recent call last):
File "......", line 22, in <module>
return_token_type_ids=True
File ".../.venv/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 2029, in encode_plus
**kwargs,
File "..../.venv/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 1837, in _get_padding_truncation_strategies
if padding_strategy != PaddingStrategy.DO_NOT_PAD and (not self.pad_token or self.pad_token_id < 0):
UnboundLocalError: local variable 'padding_strategy' referenced before assignment
```
I have redacted parts of the paths to hide my local directories.
The same thing happens for truncation too. E.g.,
```
from transformers import AutoTokenizer
from transformers.tokenization_utils_base import TruncationStrategy
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased", use_fast=True)
encoded_sentence = tokenizer.encode_plus(
"some input text",
return_attention_mask=True,
add_special_tokens=True,
max_length=20,
return_token_type_ids=True,
truncation=TruncationStrategy.LONGEST_FIRST
)
print(encoded_sentence)
```
This raises the following error:
```
Traceback (most recent call last):
File ".../scratch.py", line 22, in <module>
truncation=TruncationStrategy.LONGEST_FIRST
File ".../.venv/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 2029, in encode_plus
**kwargs,
File ".../.venv/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 1846, in _get_padding_truncation_strategies
truncation_strategy != TruncationStrategy.DO_NOT_TRUNCATE
UnboundLocalError: local variable 'truncation_strategy' referenced before assignment
```
## Expected behavior
This should not throw an exception like this. I looked at the code as well, and I think I know what the actual issue is.
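In the meantime, a workaround sketch (assuming the plain string forms of `padding` and `truncation` are accepted, which they are in this version):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased", use_fast=True)
# Passing the string values instead of the enums side-steps the unbound-variable branch.
encoded = tokenizer.encode_plus(
    "some input text",
    padding="max_length",
    truncation="longest_first",
    max_length=20,
    return_attention_mask=True,
    return_token_type_ids=True,
)
print(encoded)
```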
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7685/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7685/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7684 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7684/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7684/comments | https://api.github.com/repos/huggingface/transformers/issues/7684/events | https://github.com/huggingface/transformers/issues/7684 | 718,282,206 | MDU6SXNzdWU3MTgyODIyMDY= | 7,684 | Error with running run_language_modeling.py on GCP TPU | {
"login": "aldazero",
"id": 21077281,
"node_id": "MDQ6VXNlcjIxMDc3Mjgx",
"avatar_url": "https://avatars.githubusercontent.com/u/21077281?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aldazero",
"html_url": "https://github.com/aldazero",
"followers_url": "https://api.github.com/users/aldazero/followers",
"following_url": "https://api.github.com/users/aldazero/following{/other_user}",
"gists_url": "https://api.github.com/users/aldazero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aldazero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aldazero/subscriptions",
"organizations_url": "https://api.github.com/users/aldazero/orgs",
"repos_url": "https://api.github.com/users/aldazero/repos",
"events_url": "https://api.github.com/users/aldazero/events{/privacy}",
"received_events_url": "https://api.github.com/users/aldazero/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This doesn't seem to be an issue with `transformers` but with your TPU and its communication with your VM. You would probably have more help if you asked over at https://github.com/pytorch/xla",
"@LysandreJik Thanks for your comment. The problem rooted in the TPU software version. Setting it to `PyTorch-1.6` resolved the issue.\r\n"
] | 1,602 | 1,602 | 1,602 | NONE | null | ## Environment info
- `transformers` version: 3.3.1
- Platform: GCP
- Python version: 3.6.10
- PyTorch version (GPU?): 1.6.0
- Tensorflow version (GPU?): -
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: Yes/NO
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
blenderbot: @mariamabarham
Bart: @sshleifer
Marian: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
@sgugger @LysandreJik
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [x] the official example scripts: run_language_modeling.py
* [ ] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: fine-tuning BERT on Wikitext-103
## To reproduce
I am trying to fine-tune BERT on the Wikitext-103 dataset by running the example script run_language_modeling.py on a Google Cloud TPU v3-8 using the xla_spawn.py launcher. I tried both num_cores=1 and num_cores > 1, but neither worked properly.
Steps to reproduce the behavior:
1. Run:
```
python xla_spawn.py --num_cores 8 \
  run_language_modeling.py \
  --model_name_or_path=bert-base-uncased \
  --do_train \
  --train_data_file \
  --do_eval \
  --eval_data_file \
  --mlm \
  --per_device_train_batch_size=4
```
The full output is long and you can find it [here](https://gofile.io/d/tjovN1). Here is the beginning of the error message:
```
Iteration: 0%| | 1/28026 [00:00<3:41:44, 2.11it/s][A2020-10-09 16:05:45.623409: W
2449 tensorflow/core/distributed_runtime/rpc/grpc_remote_master.cc:160] RPC failed with status = "Unavailable: Socket closed" and grpc_error_string = "{"created":"@1602259545.623281190","description":"Error received from peer ipv4:10.48.142.250:8470","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Socket closed","grpc_status":14}", maybe retrying the RPC
2020-10-09 16:06:09.642115: E 2449 tensorflow/compiler/xla/xla_client/xla_util.cc:76] >>> Dumping Computation 0
2020-10-09 16:06:09.642213: E 2449 tensorflow/compiler/xla/xla_client/xla_util.cc:76] HloModule SyncTensorsGraph.23679, input_output_alias={ {0}: (39, {}, may-alias), {1}: (37, {}, may-alias),
......
```
## Expected behavior
Running without error!
Thanks for any help. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7684/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7684/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7683 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7683/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7683/comments | https://api.github.com/repos/huggingface/transformers/issues/7683/events | https://github.com/huggingface/transformers/pull/7683 | 718,256,978 | MDExOlB1bGxSZXF1ZXN0NTAwNjgzMDUx | 7,683 | Gpt1 for sequence classification | {
"login": "fmcurti",
"id": 7762516,
"node_id": "MDQ6VXNlcjc3NjI1MTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/7762516?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fmcurti",
"html_url": "https://github.com/fmcurti",
"followers_url": "https://api.github.com/users/fmcurti/followers",
"following_url": "https://api.github.com/users/fmcurti/following{/other_user}",
"gists_url": "https://api.github.com/users/fmcurti/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fmcurti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fmcurti/subscriptions",
"organizations_url": "https://api.github.com/users/fmcurti/orgs",
"repos_url": "https://api.github.com/users/fmcurti/repos",
"events_url": "https://api.github.com/users/fmcurti/events{/privacy}",
"received_events_url": "https://api.github.com/users/fmcurti/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This is great, thanks for working on it! There's something that I had forgotten in the initial GPT-2 implementation, which was to add it to the auto-models. I did it in this PR: https://github.com/huggingface/transformers/pull/7630.\r\n\r\nCould you apply the fix to GPT-1 as well, before we merge?",
"> This is great, thanks for working on it! There's something that I had forgotten in the initial GPT-2 implementation, which was to add it to the auto-models. I did it in this PR: #7630.\r\n> \r\n> Could you apply the fix to GPT-1 as well, before we merge?\r\n\r\nDone! 😊",
"Thanks @fmcurti!"
] | 1,602 | 1,602 | 1,602 | CONTRIBUTOR | null | # What does this PR do?
Adds a sequence classification architecture for GPT-1, strongly based on the modifications made in #7501.
Partially fixes #7623.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a GitHub issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
@LysandreJik Here is the new PR without that merge problem; let me know if there is anything that should be changed =)
"url": "https://api.github.com/repos/huggingface/transformers/issues/7683/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7683/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7683",
"html_url": "https://github.com/huggingface/transformers/pull/7683",
"diff_url": "https://github.com/huggingface/transformers/pull/7683.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7683.patch",
"merged_at": 1602579976000
} |
https://api.github.com/repos/huggingface/transformers/issues/7682 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7682/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7682/comments | https://api.github.com/repos/huggingface/transformers/issues/7682/events | https://github.com/huggingface/transformers/issues/7682 | 718,242,983 | MDU6SXNzdWU3MTgyNDI5ODM= | 7,682 | Fine-tuning | {
"login": "shainaraza",
"id": 32249936,
"node_id": "MDQ6VXNlcjMyMjQ5OTM2",
"avatar_url": "https://avatars.githubusercontent.com/u/32249936?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shainaraza",
"html_url": "https://github.com/shainaraza",
"followers_url": "https://api.github.com/users/shainaraza/followers",
"following_url": "https://api.github.com/users/shainaraza/following{/other_user}",
"gists_url": "https://api.github.com/users/shainaraza/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shainaraza/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shainaraza/subscriptions",
"organizations_url": "https://api.github.com/users/shainaraza/orgs",
"repos_url": "https://api.github.com/users/shainaraza/repos",
"events_url": "https://api.github.com/users/shainaraza/events{/privacy}",
"received_events_url": "https://api.github.com/users/shainaraza/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Both pre-training and fine-tuning involve _training_ a model (i.e., updating the weights of the model using backpropagation).\r\n\r\n* In a first step, a model is (pre-)trained on a very large dataset (such as all English Wikipedia articles). BERT for example, is pre-trained on 2 tasks: masked language modeling (MLM) and next sentence prediction (NSP). Masked language modeling is, for example, given the sentence \"I went to the [MASK] to buy a bread\", the model must predict the word \"bakery\" in this case. For sentence prediction, given 2 sentences A and B, the model must predict whether sentence B follows sentence A in the dataset, or is just a random sentence. Note that both MLM and NSP are self-supervised learning (we don't need to manually annotate the dataset, because we can just use Wikipedia and mask out some words or randomize the sentences). \r\n\r\nThe reason they call it pre-training is because it's training a model before you train it on another, second, dataset. \r\n\r\n* In a second step, a model can be fine-tuned on a task of interest, usually called the downstream task. This can be text classification for example, or named-entity recognition, question-answering, summarization,... Fine-tuning is also training a model, but we start with the model that was already (pre-)trained on the large dataset in the first step. The reason this is done is because the model has already learned a lot about language in general (just by predicting masked words and interpreting the order of sentences). So the weights of these model are already quite good, they contain some \"knowledge\". We can now just use this model, and train it further on our own (usually way smaller) dataset. Note that this is supervised learning (we need to collect a labelled dataset). In case the downstream task is text classification (for example determining whether a movie review is positive or negative), then we need to collect a dataset of movie reviews and label each individual review with either \"positive\" or \"negative\". \r\n\r\nThis picture by [Jay Allamar](http://jalammar.github.io/illustrated-bert/) illustrates this very well (note that in the figure below, the downstream task is also text classification):\r\n\r\n\r\n\r\nBTW, please post any questions which are not bugs/new features you would like to see added on the [forum](https://discuss.huggingface.co/) rather than here.\r\n\r\n",
"> Both pre-training and fine-tuning involve _training_ a model (i.e., updating the weights of the model using backpropagation).\r\n> \r\n> * In a first step, a model is (pre-)trained on a very large dataset (such as all English Wikipedia articles). BERT for example, is pre-trained on 2 tasks: masked language modeling (MLM) and next sentence prediction (NSP). Masked language modeling is, for example, given the sentence \"I went to the [MASK] to buy a bread\", the model must predict the word \"bakery\" in this case. For sentence prediction, given 2 sentences A and B, the model must predict whether sentence B follows sentence A in the dataset, or is just a random sentence. Note that both MLM and NSP are self-supervised learning (we don't need to manually annotate the dataset, because we can just use Wikipedia and mask out some words or randomize the sentences).\r\n> \r\n> The reason they call it pre-training is because it's training a model before you train it on another, second, dataset.\r\n> \r\n> * In a second step, a model can be fine-tuned on a task of interest, usually called the downstream task. This can be text classification for example, or named-entity recognition, question-answering, summarization,... Fine-tuning is also training a model, but we start with the model that was already (pre-)trained on the large dataset in the first step. The reason this is done is because the model has already learned a lot about language in general (just by predicting masked words and interpreting the order of sentences). So the weights of these model are already quite good, they contain some \"knowledge\". We can now just use this model, and train it further on our own (usually way smaller) dataset. Note that this is supervised learning (we need to collect a labelled dataset). In case the downstream task is text classification (for example determining whether a movie review is positive or negative), then we need to collect a dataset of movie reviews and label each individual review with either \"positive\" or \"negative\".\r\n> \r\n> This picture by [Jay Allamar](http://jalammar.github.io/illustrated-bert/) illustrates this very well (note that in the figure below, the downstream task is also text classification):\r\n> \r\n> \r\n> \r\n> BTW, please post any questions which are not bugs/new features you would like to see added on the [forum](https://discuss.huggingface.co/) rather than here.\r\n\r\nthanks @NielsRogge for the splendid information",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,602 | 1,608 | 1,608 | NONE | null | I read lot of articles on pre-training vs fine-tuning but I am still not able to get the meaning in the context of transformer models, I understand the pre-training is training the dataset from scratch , while (point of confusion) the fine-tuning is using the pre-trained model/data and add on the top of that with our own custom dataset?
Then it comes the downstream tasks, any fine line of distinctions among these three terms pre-training, finetuning and downstream tasks
please clarify | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7682/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7682/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7681 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7681/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7681/comments | https://api.github.com/repos/huggingface/transformers/issues/7681/events | https://github.com/huggingface/transformers/pull/7681 | 718,226,566 | MDExOlB1bGxSZXF1ZXN0NTAwNjU3Njk2 | 7,681 | Delete extra test file in repo root | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,602 | 1,602 | 1,602 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7681/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7681/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7681",
"html_url": "https://github.com/huggingface/transformers/pull/7681",
"diff_url": "https://github.com/huggingface/transformers/pull/7681.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7681.patch",
"merged_at": 1602256596000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/7680 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7680/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7680/comments | https://api.github.com/repos/huggingface/transformers/issues/7680/events | https://github.com/huggingface/transformers/pull/7680 | 718,218,139 | MDExOlB1bGxSZXF1ZXN0NTAwNjUwNTgy | 7,680 | Better links for models in README and doc index | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,602 | 1,602 | 1,602 | COLLABORATOR | null | # What does this PR do?
This PR fixes the links to the docs for unreleased models and makes the automatic copy to the index.rst a little bit better (by using relative links so there is no jump in version).
It adds instructions in the setup.py to clean up the `master` links for unreleased models at the time of a release; we will just need to remember to put the right links in the README in any PR that adds a new model.
Fixes #7657 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7680/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7680/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7680",
"html_url": "https://github.com/huggingface/transformers/pull/7680",
"diff_url": "https://github.com/huggingface/transformers/pull/7680.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7680.patch",
"merged_at": 1602256636000
} |
https://api.github.com/repos/huggingface/transformers/issues/7679 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7679/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7679/comments | https://api.github.com/repos/huggingface/transformers/issues/7679/events | https://github.com/huggingface/transformers/issues/7679 | 718,212,039 | MDU6SXNzdWU3MTgyMTIwMzk= | 7,679 | TFEncoderDecoder | {
"login": "discoveredcheck",
"id": 25035016,
"node_id": "MDQ6VXNlcjI1MDM1MDE2",
"avatar_url": "https://avatars.githubusercontent.com/u/25035016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/discoveredcheck",
"html_url": "https://github.com/discoveredcheck",
"followers_url": "https://api.github.com/users/discoveredcheck/followers",
"following_url": "https://api.github.com/users/discoveredcheck/following{/other_user}",
"gists_url": "https://api.github.com/users/discoveredcheck/gists{/gist_id}",
"starred_url": "https://api.github.com/users/discoveredcheck/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/discoveredcheck/subscriptions",
"organizations_url": "https://api.github.com/users/discoveredcheck/orgs",
"repos_url": "https://api.github.com/users/discoveredcheck/repos",
"events_url": "https://api.github.com/users/discoveredcheck/events{/privacy}",
"received_events_url": "https://api.github.com/users/discoveredcheck/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,602 | 1,608 | 1,608 | NONE | null | # 🚀 Feature request
A TensorFlow version of the `EncoderDecoder` class for sequence-to-sequence text generation.
## Motivation
I am replicating [this](https://arxiv.org/pdf/1907.12461.pdf) paper, which studies several combinations of encoders and decoders initialized from pretrained checkpoints. Most of it is implemented very nicely in the current API, but it is not available for TensorFlow. The closest thing I found was `TFT5ForConditionalGeneration`, which works nicely. Are there any plans to extend this to other pretrained models such as BERT, RoBERTa, GPT(2)?
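
For reference, here is roughly how I use `TFT5ForConditionalGeneration` today (a minimal sketch with a small public checkpoint); I am after something with the same shape for, e.g., a BERT encoder paired with a GPT-2 decoder:
```
from transformers import T5Tokenizer, TFT5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = TFT5ForConditionalGeneration.from_pretrained("t5-small")

# Encode the source sequence and generate the target sequence.
inputs = tokenizer("translate English to German: The house is wonderful.", return_tensors="tf")
output_ids = model.generate(inputs["input_ids"])
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```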
## Your contribution
Happy to work on a PR if someone can provide some pointers on where to start and on best practices for adding new models to the existing API.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7679/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7679/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7678 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7678/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7678/comments | https://api.github.com/repos/huggingface/transformers/issues/7678/events | https://github.com/huggingface/transformers/pull/7678 | 718,158,515 | MDExOlB1bGxSZXF1ZXN0NTAwNTk5OTU4 | 7,678 | Fix dataset cardinality | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,602 | 1,686 | 1,602 | CONTRIBUTOR | null | # What does this PR do?
This PR fixes an issue with the generic text classification example where the sizes of the datasets were not properly set.
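
For context, a minimal sketch of the underlying problem (the dataset here is illustrative, not the example's actual code): `tf.data` cannot infer the size of a generator-backed dataset, so the cardinality has to be asserted explicitly before consumers such as `TFTrainer` can read it.
```
import tensorflow as tf

# A generator-backed dataset has unknown cardinality by default.
ds = tf.data.Dataset.from_generator(lambda: iter(range(100)), output_types=tf.int32)
print(tf.data.experimental.cardinality(ds).numpy())  # -2 == UNKNOWN_CARDINALITY

# Asserting the cardinality makes the dataset size recoverable downstream.
ds = ds.apply(tf.data.experimental.assert_cardinality(100))
print(tf.data.experimental.cardinality(ds).numpy())  # 100
```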
Fixes #7637 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7678/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7678/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7678",
"html_url": "https://github.com/huggingface/transformers/pull/7678",
"diff_url": "https://github.com/huggingface/transformers/pull/7678.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7678.patch",
"merged_at": 1602254305000
} |
https://api.github.com/repos/huggingface/transformers/issues/7677 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7677/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7677/comments | https://api.github.com/repos/huggingface/transformers/issues/7677/events | https://github.com/huggingface/transformers/issues/7677 | 718,138,408 | MDU6SXNzdWU3MTgxMzg0MDg= | 7,677 | Batch and smart batch support for pipelines. | {
"login": "PhilipMay",
"id": 229382,
"node_id": "MDQ6VXNlcjIyOTM4Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PhilipMay",
"html_url": "https://github.com/PhilipMay",
"followers_url": "https://api.github.com/users/PhilipMay/followers",
"following_url": "https://api.github.com/users/PhilipMay/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions",
"organizations_url": "https://api.github.com/users/PhilipMay/orgs",
"repos_url": "https://api.github.com/users/PhilipMay/repos",
"events_url": "https://api.github.com/users/PhilipMay/events{/privacy}",
"received_events_url": "https://api.github.com/users/PhilipMay/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I see the following requirements:\r\n\r\n1. automatic batching\r\n2. maybe smart batching\r\n3. multi GPU support\r\n4. Tokenization with multiple processes in parallel to the prediction\r\n5. `max_length` and `truncation` support\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,602 | 1,608 | 1,608 | CONTRIBUTOR | null | # 🚀 Feature request
## Motivation
I want to use `TextClassificationPipeline` to classify a large number of texts.
The naive approach is:
```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    TextClassificationPipeline,
)

model = AutoModelForSequenceClassification.from_pretrained(model_dir)
tokenizer = AutoTokenizer.from_pretrained(model_dir)
pipeline = TextClassificationPipeline(
model=model,
tokenizer=tokenizer,
framework="pt",
device=0,
)
results = pipeline(unlabeled_text_list)
```
But this gives me a CUDA OOM error when `unlabeled_text_list` is long.
What about adding batch support that lets you specify the batch size and maybe also support for multiprocessing tokenization?
Where possible, smart batching would also be nice. See this: https://github.com/UKPLab/sentence-transformers/issues/454#issuecomment-699496454
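In the meantime I chunk the input manually, continuing the snippet above (a rough sketch; `batch_size` is illustrative, and the sort implements the smart-batching idea from the linked issue):
```python
batch_size = 32  # illustrative; pick what fits in GPU memory

# Smart batching: sort by text length so each batch pads to a similar
# length, then restore the original order afterwards.
order = sorted(range(len(unlabeled_text_list)), key=lambda i: len(unlabeled_text_list[i]))
sorted_texts = [unlabeled_text_list[i] for i in order]

sorted_results = []
for start in range(0, len(sorted_texts), batch_size):
    sorted_results.extend(pipeline(sorted_texts[start:start + batch_size]))

# Undo the sort so results[i] corresponds to unlabeled_text_list[i].
results = [None] * len(sorted_results)
for pos, i in enumerate(order):
    results[i] = sorted_results[pos]
```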
What do you think? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7677/reactions",
"total_count": 10,
"+1": 10,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7677/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7676 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7676/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7676/comments | https://api.github.com/repos/huggingface/transformers/issues/7676/events | https://github.com/huggingface/transformers/issues/7676 | 718,099,209 | MDU6SXNzdWU3MTgwOTkyMDk= | 7,676 | TFTrainer doesn't work | {
"login": "agemagician",
"id": 6087313,
"node_id": "MDQ6VXNlcjYwODczMTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6087313?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/agemagician",
"html_url": "https://github.com/agemagician",
"followers_url": "https://api.github.com/users/agemagician/followers",
"following_url": "https://api.github.com/users/agemagician/following{/other_user}",
"gists_url": "https://api.github.com/users/agemagician/gists{/gist_id}",
"starred_url": "https://api.github.com/users/agemagician/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/agemagician/subscriptions",
"organizations_url": "https://api.github.com/users/agemagician/orgs",
"repos_url": "https://api.github.com/users/agemagician/repos",
"events_url": "https://api.github.com/users/agemagician/events{/privacy}",
"received_events_url": "https://api.github.com/users/agemagician/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello!\r\n\r\nIt looks to be an XLA error due to a precision issue. Did you train your model with mixed precision?",
"No, It was trained with the official Bert script on TPU without mixed precision.",
"By official Bert script you mean this one ? https://github.com/tensorflow/models/blob/master/official/nlp/bert/run_pretraining.py\r\n\r\nTo be sure it is coming from the trainer can you try with the `bert-base-cased` without the `from_pt=True` and let us know if the training finally starts.",
"By the official Bert I mean script, I mean:\r\nhttps://github.com/google-research/bert\r\n\r\nWith the `bert-base-cased` model without the `from_pt=True`, I get another error:\r\n```\r\nINFO:tensorflow:Initializing the TPU system: grpc://10.112.10.242:8470\r\nINFO:tensorflow:Clearing out eager caches\r\nINFO:tensorflow:Clearing out eager caches\r\nINFO:tensorflow:Finished initializing TPU system.\r\nINFO:tensorflow:Finished initializing TPU system.\r\nWARNING:absl:`tf.distribute.experimental.TPUStrategy` is deprecated, please use the non experimental symbol `tf.distribute.TPUStrategy` instead.\r\nINFO:tensorflow:Found TPU system:\r\nINFO:tensorflow:Found TPU system:\r\nINFO:tensorflow:*** Num TPU Cores: 8\r\nINFO:tensorflow:*** Num TPU Cores: 8\r\nINFO:tensorflow:*** Num TPU Workers: 1\r\nINFO:tensorflow:*** Num TPU Workers: 1\r\nINFO:tensorflow:*** Num TPU Cores Per Worker: 8\r\nINFO:tensorflow:*** Num TPU Cores Per Worker: 8\r\nINFO:tensorflow:*** Available Device: _DeviceAttributes(/job:localhost/replica:0/task:0/device:CPU:0, CPU, 0, 0)\r\nINFO:tensorflow:*** Available Device: _DeviceAttributes(/job:localhost/replica:0/task:0/device:CPU:0, CPU, 0, 0)\r\nINFO:tensorflow:*** Available Device: _DeviceAttributes(/job:localhost/replica:0/task:0/device:XLA_CPU:0, XLA_CPU, 0, 0)\r\nINFO:tensorflow:*** Available Device: _DeviceAttributes(/job:localhost/replica:0/task:0/device:XLA_CPU:0, XLA_CPU, 0, 0)\r\nINFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:CPU:0, CPU, 0, 0)\r\nINFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:CPU:0, CPU, 0, 0)\r\nINFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:0, TPU, 0, 0)\r\nINFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:0, TPU, 0, 0)\r\nINFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:1, TPU, 0, 0)\r\nINFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:1, TPU, 0, 0)\r\nINFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:2, TPU, 0, 0)\r\nINFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:2, TPU, 0, 0)\r\nINFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:3, TPU, 0, 0)\r\nINFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:3, TPU, 0, 0)\r\nINFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:4, TPU, 0, 0)\r\nINFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:4, TPU, 0, 0)\r\nINFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:5, TPU, 0, 0)\r\nINFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:5, TPU, 0, 0)\r\nINFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:6, TPU, 0, 0)\r\nINFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:6, TPU, 0, 0)\r\nINFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:7, TPU, 0, 0)\r\nINFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:7, TPU, 0, 0)\r\nINFO:tensorflow:*** Available Device: 
_DeviceAttributes(/job:worker/replica:0/task:0/device:TPU_SYSTEM:0, TPU_SYSTEM, 0, 0)\r\nINFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU_SYSTEM:0, TPU_SYSTEM, 0, 0)\r\nINFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:XLA_CPU:0, XLA_CPU, 0, 0)\r\nINFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:XLA_CPU:0, XLA_CPU, 0, 0)\r\nDownloading: 100%\r\n527M/527M [00:07<00:00, 74.8MB/s]\r\n\r\nSome weights of the model checkpoint at bert-base-cased were not used when initializing TFBertForSequenceClassification: ['nsp___cls', 'mlm___cls']\r\n- This IS expected if you are initializing TFBertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).\r\n- This IS NOT expected if you are initializing TFBertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\nSome weights of TFBertForSequenceClassification were not initialized from the model checkpoint at bert-base-cased and are newly initialized: ['dropout_37', 'classifier']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\n---------------------------------------------------------------------------\r\nInvalidArgumentError Traceback (most recent call last)\r\n<ipython-input-15-e4258565c051> in <module>()\r\n 21 )\r\n 22 \r\n---> 23 trainer.train()\r\n\r\n4 frames\r\n/usr/local/lib/python3.6/dist-packages/transformers/trainer_tf.py in train(self)\r\n 472 Train method to train the model.\r\n 473 \"\"\"\r\n--> 474 train_ds = self.get_train_tfdataset()\r\n 475 \r\n 476 if self.args.debug:\r\n\r\n/usr/local/lib/python3.6/dist-packages/transformers/trainer_tf.py in get_train_tfdataset(self)\r\n 135 \r\n 136 self.total_train_batch_size = self.args.train_batch_size * self.args.gradient_accumulation_steps\r\n--> 137 self.num_train_examples = tf.data.experimental.cardinality(self.train_dataset).numpy()\r\n 138 \r\n 139 if self.num_train_examples < 0:\r\n\r\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py in numpy(self)\r\n 1061 \"\"\"\r\n 1062 # TODO(slebedev): Consider avoiding a copy for non-CPU or remote tensors.\r\n-> 1063 maybe_arr = self._numpy() # pylint: disable=protected-access\r\n 1064 return maybe_arr.copy() if isinstance(maybe_arr, np.ndarray) else maybe_arr\r\n 1065 \r\n\r\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py in _numpy(self)\r\n 1029 return self._numpy_internal()\r\n 1030 except core._NotOkStatusException as e: # pylint: disable=protected-access\r\n-> 1031 six.raise_from(core._status_to_exception(e.code, e.message), None) # pylint: disable=protected-access\r\n 1032 \r\n 1033 @property\r\n\r\n/usr/local/lib/python3.6/dist-packages/six.py in raise_from(value, from_value)\r\n\r\nInvalidArgumentError: Unable to parse tensor proto\r\n```\r\n\r\nIt works fine with Pytorch but not with tensorflow for some reason.",
"Ok, then the error is normal. The TF part of transformers don't take into account a model that comes straight from the official BERT script. You have to:\r\n\r\n1. Convert your checkpoint with this [script](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_bert_original_tf_checkpoint_to_pytorch.py) if you are using the TF1 checkpoint or this [one](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_bert_original_tf2_checkpoint_to_pytorch.py) if you are using the TF2 checkpoint.\r\n2. Once you get the checkpoints in PyTorch use this [script](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_pytorch_checkpoint_to_tf2.py) to get your model in a proper transformers format and use it in your pipeline.\r\n\r\nWith `bert-base-cased` the error you get is because you have to apply a cardinality to your dataset with:\r\n\r\n```\r\nmy_dataset.apply(tf.data.experimental.assert_cardinality(number_of_examples_in_my_dataset))\r\n```",
"Thanks a lot @jplu , that did solve my issue.\r\n\r\nThe problem was in the second step which is converting the Pytorch to tf2 checkpoint.\r\n\r\nBy the way, there is a bug in the conversion script:\r\n\r\n```\r\n2020-10-09 21:45:28.508061: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1\r\n====================================================================================================\r\n Converting model type 1/1: bert\r\n====================================================================================================\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.6/runpy.py\", line 193, in _run_module_as_main\r\n \"__main__\", mod_spec)\r\n File \"/usr/lib/python3.6/runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/convert_pytorch_checkpoint_to_tf2.py\", line 437, in <module>\r\n only_convert_finetuned_models=args.only_convert_finetuned_models,\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/convert_pytorch_checkpoint_to_tf2.py\", line 322, in convert_all_pt_checkpoints_to_tf\r\n config_class, model_class, pt_model_class, aws_model_maps, aws_config_map = MODEL_CLASSES[model_type]\r\nValueError: not enough values to unpack (expected 5, got 4)\r\n```\r\n\r\nI had to call directly the \"convert_pt_checkpoint_to_tf\" function inside the file, because \"MODEL_CLASSES[model_type]\" has only 4 objects while in \"convert_all_pt_checkpoints_to_tf\" it tries to extract 5 objects .",
"Happy it worked.\r\n\r\nCan you open a new issue about the error you got with the converting script. Thanks."
] | 1,602 | 1,602 | 1,602 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 3.3.1
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0+cu101 (False)
- Tensorflow version (GPU?): 2.3.0 (False)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
albert, bert, GPT2, XLM: @LysandreJik
Trainer: @sgugger
tensorflow: @jplu
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
Protein Sequence dataset
## To reproduce
Steps to reproduce the behavior:
https://colab.research.google.com/drive/1v0FMM_iuRSixvDaoHaiP77pel7qkoYL8?usp=sharing
```
WARNING:tensorflow:TPU system grpc://10.12.199.226:8470 has already been initialized. Reinitializing the TPU can cause previously created variables on TPU to be lost.
INFO:tensorflow:Initializing the TPU system: grpc://10.12.199.226:8470
INFO:tensorflow:Clearing out eager caches
INFO:tensorflow:Finished initializing TPU system.
WARNING:absl:`tf.distribute.experimental.TPUStrategy` is deprecated, please use the non experimental symbol `tf.distribute.TPUStrategy` instead.
INFO:tensorflow:Found TPU system:
INFO:tensorflow:*** Num TPU Cores: 8
INFO:tensorflow:*** Num TPU Workers: 1
INFO:tensorflow:*** Num TPU Cores Per Worker: 8
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:localhost/replica:0/task:0/device:CPU:0, CPU, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:localhost/replica:0/task:0/device:XLA_CPU:0, XLA_CPU, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:CPU:0, CPU, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:0, TPU, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:1, TPU, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:2, TPU, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:3, TPU, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:4, TPU, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:5, TPU, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:6, TPU, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:7, TPU, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU_SYSTEM:0, TPU_SYSTEM, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:XLA_CPU:0, XLA_CPU, 0, 0)
Exception ignored in: <bound method EagerResourceDeleter.__del__ of <tensorflow.python.ops.resource_variable_ops.EagerResourceDeleter object at 0x7fec453cbf60>>
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 293, in __del__
self._handle, ignore_lookup_error=True)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gen_resource_variable_ops.py", line 252, in destroy_resource_op
_ops.raise_from_not_ok_status(e, name)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 6843, in raise_from_not_ok_status
six.raise_from(core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: stream is uninitialized or in an error state [Op:DestroyResourceOp]
[... the same EagerResourceDeleter traceback repeats 14 more times for other resource objects; one final instance follows ...]
Exception ignored in: <bound method EagerResourceDeleter.__del__ of <tensorflow.python.ops.resource_variable_ops.EagerResourceDeleter object at 0x7fec44b73320>>
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 293, in __del__
self._handle, ignore_lookup_error=True)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gen_resource_variable_ops.py", line 252, in destroy_resource_op
_ops.raise_from_not_ok_status(e, name)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 6843, in raise_from_not_ok_status
six.raise_from(core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: stream is uninitialized or in an error state [Op:DestroyResourceOp]
Exception ignored in: <bound method EagerResourceDeleter.__del__ of <tensorflow.python.ops.resource_variable_ops.EagerResourceDeleter object at 0x7fec44b732b0>>
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 293, in __del__
self._handle, ignore_lookup_error=True)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gen_resource_variable_ops.py", line 252, in destroy_resource_op
_ops.raise_from_not_ok_status(e, name)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 6843, in raise_from_not_ok_status
six.raise_from(core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: stream is uninitialized or in an error state [Op:DestroyResourceOp]
Exception ignored in: <bound method EagerResourceDeleter.__del__ of <tensorflow.python.ops.resource_variable_ops.EagerResourceDeleter object at 0x7fec44b73240>>
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 293, in __del__
self._handle, ignore_lookup_error=True)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gen_resource_variable_ops.py", line 252, in destroy_resource_op
_ops.raise_from_not_ok_status(e, name)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 6843, in raise_from_not_ok_status
six.raise_from(core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: stream is uninitialized or in an error state [Op:DestroyResourceOp]
Exception ignored in: <bound method EagerResourceDeleter.__del__ of <tensorflow.python.ops.resource_variable_ops.EagerResourceDeleter object at 0x7fec44b731d0>>
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 293, in __del__
self._handle, ignore_lookup_error=True)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gen_resource_variable_ops.py", line 252, in destroy_resource_op
_ops.raise_from_not_ok_status(e, name)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 6843, in raise_from_not_ok_status
six.raise_from(core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: stream is uninitialized or in an error state [Op:DestroyResourceOp]
---------------------------------------------------------------------------
InternalError Traceback (most recent call last)
<ipython-input-18-06617a21566a> in <module>()
11
12 with training_args.strategy.scope():
---> 13 model = TFAutoModelForSequenceClassification.from_pretrained(model_name, from_pt=True)
14
15 trainer = TFTrainer(
26 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py in shape(self)
1165 # `_tensor_shape` is declared and defined in the definition of
1166 # `EagerTensor`, in C.
-> 1167 self._tensor_shape = tensor_shape.TensorShape(self._shape_tuple())
1168 except core._NotOkStatusException as e:
1169 six.raise_from(core._status_to_exception(e.code, e.message), None)
InternalError: RET_CHECK failure (platforms/xla/service/jellyfish/bounds_check.cc:427) allocation_size_words <= std::numeric_limits<int32>::max()
```
## Expected behavior
I am following the example for fine-tuning on a custom dataset:
https://huggingface.co/transformers/custom_datasets.html
It works with PyTorch, but with TensorFlow it doesn't. On TPU it gives the above error, and on GPU it simply never starts.
Any idea what I did wrong?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7676/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7676/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7675 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7675/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7675/comments | https://api.github.com/repos/huggingface/transformers/issues/7675/events | https://github.com/huggingface/transformers/issues/7675 | 718,084,179 | MDU6SXNzdWU3MTgwODQxNzk= | 7,675 | Add FAVOR+ / Performer attention | {
"login": "marrrcin",
"id": 6958772,
"node_id": "MDQ6VXNlcjY5NTg3NzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/6958772?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marrrcin",
"html_url": "https://github.com/marrrcin",
"followers_url": "https://api.github.com/users/marrrcin/followers",
"following_url": "https://api.github.com/users/marrrcin/following{/other_user}",
"gists_url": "https://api.github.com/users/marrrcin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marrrcin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marrrcin/subscriptions",
"organizations_url": "https://api.github.com/users/marrrcin/orgs",
"repos_url": "https://api.github.com/users/marrrcin/repos",
"events_url": "https://api.github.com/users/marrrcin/events{/privacy}",
"received_events_url": "https://api.github.com/users/marrrcin/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | open | false | null | [] | [
"Just for reference, there is two open-source MIT implementations in pytorch.\r\n\r\nhttps://github.com/lucidrains/performer-pytorch\r\nAnd\r\nhttps://github.com/idiap/fast-transformers",
"This could prove particularly important for longer sequences like protein sequences and long texts.\r\nHigh level overview at https://ai.googleblog.com/2020/10/rethinking-attention-with-performers.html.",
"if this could be implemented it would be dope!",
"It would be nice to make it possible to use FAVOR+ in combination with the pretrained models that use softmax attention— at least the popular ones like BERT. Or even better, someone would just do the fine tuning for the common pretrained models and then we could make those available out of the box. I should be able to do that for DistilBERT since I plan to be using DistilBERT + FAVOR for a project soon.",
"Just started a fork to work on this at https://github.com/norabelrose/transformers-plus-performers. Is it okay with everyone if I implement it by creating a new file implementing FAVOR+ multihead attention (maybe one file for the PyTorch implementation and one for the TF implementation), then adding an option to BertConfig and DistilBertConfig (and maybe other model config classes) allowing the user to select FAVOR+ as the attention implementation?\r\n\r\nIt just seems sort of silly and wasteful to create multiple entirely new models for this when FAVOR+ has backwards compatibility.\r\n\r\nAlso since FAVOR+ is an unbiased estimator of full softmax attention, it should be possible to have an option that would tell the model to dynamically switch between FAVOR+ and full attention at test time depending on the sequence length. This would be desirable since FAVOR+ is slower than softmax attention when the sequence is shorter than O(d*log(d)), where d is the number of dimensions per attention head. Implementing such dynamic switching would be easier and more elegant if FAVOR+ is just a config option and not a new model class.",
"Any update on the implementation of this new architecture? @norabelrose",
"@marcoabrate The initial implementation is complete at https://github.com/norabelrose/transformers-plus-performers/blob/performers/src/transformers/modeling_performer_attention.py. Haven't been able to test it yet because getting my hands on the right datasets for finetuning DistilBERT with Performer attention, preprocessing the data, etc. has proven to be a huge ordeal. Should hopefully be able to do it today though.",
"UPDATE: The most recent commit on my transformers-plus-performers repo is now up and running. Right now I only changed DistilBertModel and DistilBertConfig to enable them to use Performer attention (just set attention_type='performer'), but it should be quite trivial to add the feature to other models.\r\n\r\nAs I type this I'm fine-tuning the distilbert-base-uncased pretrained model to work with Performer attention by distilling it against bert-base-uncased. You should be able to just directly fine-tune it with MLM but I figured that distillation might get you better results. It seems to be converging rather quickly but I haven't been running it for long and I only have one GPU to work with.\r\n\r\nI would welcome other people taking a look at my repo and submitting pull requests to it.",
"> FAVOR+ is slower than softmax attention when the sequence is shorter than O(d*log(d)), where d is the number of dimensions per attention head\r\n\r\nWhat are those numbers for DistilBERT, BERT-base and BERT-large?\r\n\r\nDid you compare real speed?",
"I haven't had a chance to compare the difference on actual models yet, but I should be able to do that in the next day or two.\r\n\r\nI have, however, tested the speed difference between softmax attention and FAVOR+ on random Gaussian matrices. FAVOR+ really starts to get faster when the sequence length is ~18 times larger than d*ln(d), at least on my GPU. With BERT settings (d_model = 768, num_heads = 12) that means about 5000 tokens.\r\n\r\n\r\n\r\n\r\nThis is basically because you have to matrix-multiply Q and K by the random feature matrix, which you don't have to do for softmax attention. You get better results with Performer when (d_model / num_heads) is smaller:\r\n\r\n\r\n\r\nI should mention that while FAVOR+ might be slower than softmax for some of these \"medium\" sequence lengths, it should still be using less _memory_ than softmax, since it isn't allocating that L x L attention matrix. So there's somewhat of a memory-time tradeoff here.\r\n\r\nThe numbers I show above are from my own implementation of FAVOR+, but I also tried it with the performer_pytorch implementation and got almost identical results. Really, FAVOR+ is an attention mechanism for long sequences. It's got this great unique property that it's an unbiased estimator of softmax attention. That means that you can easily use it with models that were pretrained on softmax attention, and you can switch between FAVOR+ and softmax at inference time. And that's why it should be part of Huggingface.",
"UPDATE: While I have Performer up and running with DistilBertModel, I've run into a problem that I didn't even think about when I started. DistilBERT, BERT, RoBERTa, and several other models use _learned_ positional embeddings, which impose a fixed 512-token max sequence length. In order to process sequences longer than 512 tokens, and thereby get the benefits of Performer attention, we'll need to use some other type of positional embeddings; for maximum flexibility, probably fixed sinusoidal embeddings with some large max sequence length. We could also try using relative position embeddings, although AFAIK no one has actually tried doing that with Performer attention and I would need to think about it a bit to figure out if that's actually feasible. DistilBertModel actually already comes with a sinusoidal_pos_embds option, but this option is overridden when you load the weights from a pretrained model.\r\n\r\nIt's not clear how hard it would be to finetune a pretrained model that was trained with learned positional embeddings to use fixed sinusoidal ones, or if it would even be worth it— it may be necessary to just train them from scratch, especially since we are _also_ trying to swap out the attention mechanism. I'll try finetuning soon and see what happens. But it's looking more likely that we won't be able to just plug in the already existing checkpoints like we initially hoped. If that turns out to be the case, it would be really great if someone with access to more than one GPU could do the training from scratch and upload the models :)\r\n\r\nPS: After @djstrong's comment about FAVOR+'s performance on relatively short sequences, I wanted to get to the bottom of why FAVOR+ was so much slower until you get up to around 5000 tokens. Oddly enough, it turns out that the torch.max() operation which is used to generate the numerical stabilizer for the exp() kernel was the main culprit. When you don't use a stabilizer, Performer attention starts beating softmax attention at much shorter sequence lengths. So I added an option in PerformerAttentionConfig to turn off the stabilizer.",
"https://github.com/huggingface/transformers/issues/8893\r\n\r\nTensorflow code, not jax. Thank you.",
"@guotong1988 as of about half an hour ago, my fork now has a TensorFlow implementation: https://github.com/norabelrose/transformers-plus-performers/blob/performers/src/transformers/modeling_tf_performer_attention.py.\r\n\r\nI have not had a chance to test it at all. If someone else could at least try getting it working on their own system that would be great. Pull requests are welcome.",
"Hey @norabelrose , I'm part of the Performer team at Google, it's great to see this getting added to huggingface! Would you be open to meeting so we can discuss how we can work together on this? If anyone else is interested in joining the meeting please comment here and I'll reach out to coordinate.",
"@tomweingarten Sure! Send me an email at [email protected] and we can set up a time to talk in the next couple weeks. As I mentioned above, the basic implementation in PyTorch and TensorFlow is done but we need to write unit tests and make sure everything is working properly. \r\n\r\nAlso, in my fork at transformers-plus-performers I had to make a few minor changes to other parts of HuggingFace in order to get training to run smoothly on my machine— in particular, the distillation example program, since I initially tested PerformerAttention by continuing distillation of a pretrained DistilBERT model with Performer attention against bert-base. The implementation of distillation on master loads all the training data at once into RAM, which blows up on my meager hardware. I changed it so that you can load the training data incrementally. That's probably a good thing to add to the master branch, but arguably it should be put in a separate pull request. So we'll have to change that, a long with a couple other little things.\r\n\r\nI'd recommend you check out my fork at https://github.com/norabelrose/transformers-plus-performers/. The relevant files are /src/transformers/configuration_performer_attention.py, /src/transformers/modeling_performer_attention.py, and /src/transformers/modeling_tf_performer_attention.py. I also changed the BERT and DistilBERT model and config files so the user can use Performer attention with them. I'll accept pull requests on that repo.\r\n\r\nPS: Also just realizing that the definition of short_sequence_behavior on PerformerAttentionConfig in the last commit is defined variously as Union[str, dict], Union[str, Callable], or Union[str, tuple]— sorry about that, I wasn't really sure how best to implement that feature. Right now the actual implementation in PerformerAttention assumes it's a str or Callable.",
"@tomweingarten @norabelrose I would like to participate in the meeting too, if possible. I am working with long sequences for summarization. I have not had the chance to go through the code thoroughly yet, but I am ready to help soon.\r\n\r\nEdit: you can reach me at [email protected]",
"@norabelrose Is there any plan to support unidirectional attention ?",
"Hi guys, thanks to @kchoro and @ValeryTyumen on the Performers team, we've open-sourced the Tensorflow version of FAVOR+ here: https://github.com/google-research/google-research/tree/master/performer/fast_attention/tensorflow\r\n\r\nBTW, we've edited the folder name and code to be `fast_attention` now rather than `fast_self_attention`.\r\n\r\nPlease let us know how well it works in your pipelines!",
"UPDATE: The new default branch (\"clean\") on my fork at https://github.com/norabelrose/transformers-plus-performers/ now has all the extraneous changes I made to the upstream removed. I also merged in all new commits from upstream.\r\n\r\n@TwinMooon Yes, we should be able to add causal attention. I was under the impression that it would be necessary to include a custom CUDA kernel from the fast-transformers library to compute the prefix sums— since that's what the performer_pytorch implementation does, which I used as a template for my implementation— but now looking at the Google code in both Jax and TensorFlow I realize that they just compute the prefix sums in Python code and then use a custom gradient. So it looks like it's not necessary, although it's possible that using the CUDA kernel gives you a noticeable speed boost.",
"I'd like to set a goal of making an official pull request to add this to master by the end of the year. I haven't been able to do that yet because I've been busy with school and other projects, and I haven't gotten any help from other contributors. Key things that need to be done are:\r\n- Add causal attention\r\n- Translate the unit tests from the Google implementation and add them to the fork (and make sure we pass those tests, obviously)\r\n- Clean up the short_sequence_behavior feature (or just get rid of it)\r\n\r\nAs always, any and all help with these tasks is welcome.",
"@TwinMooon Update: I got causal attention working by translating the Google implementation, but as I feared, it's very slow since it doesn't use a custom CUDA kernel. On my GPU, it's 19-20 times slower than noncausal attention. But there might be a way around this; I'll have to think about it.\r\n\r\nIn the meantime, I think I'm going to add an optional dependency on the fast_transformers package (just wrapping the import statement in a try... except block) to get access their custom CUDA kernel. I'll include a warning if the user doesn't have it installed that causal attention might have bad performance without the package. That's what the performer_pytorch package does.",
"@norabelrose For the past two days, I have implemented a version of causal attention by just translating Google's TensorFlow implementation. After reading your code, I found that our implementation is quite similar. However, The causal version runs a little faster than the non-casual version in my machine. \r\nMy PyTorch version is 1.5.0 and run it in a 2080Ti with CUDA 10.0",
"@TwinMooon Ok cool! If you wouldn’t mind submitting a pull request to my fork or just copy and pasting the relevant block of code here then I could check to see if your version is faster. It’s possible that I’m making some silly mistake.\r\n\r\nI’m running it on a GeForce GTX 1080 with PyTorch 1.4.0 and CUDA 10.0.0. It was also noticeably a lot slower than noncausal attention on my CPU only laptop which has PyTorch 1.7.\r\n\r\nPS: Is it possible that you got the tensor shapes mixed up? The Google implementation expects tensors of shape [length, batch, heads, random features/embedding dim] while everywhere else it's usually [batch, heads, length, random features/embedding dim], so you have to permute the tensor dimensions. The code will actually run if you give it tensors with the [B, H, L, D] shape though, so I got tripped up on that when I first translated the Google code and it made it look like it was faster than it actually was. If you're using a small batch size of say, 1 or 5, it'll be a lot faster to compute prefix sums over the batch dimension than doing it over the sequence length dimension of size 512 (which is what it's actually supposed to do).",
"@norabelrose You can review my implementation [here](https://github.com/TwinMooon/transformers-plus-performers/commit/c17d6473deb5316363f60bb2ddd1007d4364abe4). I permuted the tensor shape before stuff into the casual attention. ",
"@TwinMooon In your code, you spell the word \"causal\" two different ways: \"causal\" and \"casual\". You use the \"causal\" spelling in the forward() method where short_sequence_behavior indicates to use softmax attention, and then you use casual everywhere else.\r\n\r\nIs it possible that you're initializing the PerformerAttention object sort of like this:\r\n`PerformerAttention(PerformerAttentionConfig(d_model=768, num_heads=12), causal=True)`\r\nso that the \"casual\" attribute remains its default value of False, and none of the causal attention code ever actually gets called? I should probably change `__init__` so it that it always throws an error when you include a nonexistent attribute in kwargs.\r\n\r\nIn other news, I figured out a sort of clever way of making causal attention like 2x faster, and that's in my latest commit.",
"Mark Zakharov made a Colab where he successfully finetuned a DistilBERT model with the most recent version of my fork, which you can check out here: https://colab.research.google.com/drive/1BUYk4qxdt1b3d5mx6_t0nnX5jP9KwVAv?usp=sharing\r\n\r\nI think the project is almost ready to be made into a formal pull request.",
"@norabelrose cool! I'll try it now.",
"This is really great work guys! We are currently running some experiments on the flax version of Performer internally and looking into how to best integrate the model into Transformers. @norabelrose a PR in PyTorch and or Tensorflow would be amazing!",
"Excited to see the progress here! Just wanted to give a heads-up that we fixed a [significant bug](https://github.com/google-research/google-research/commit/b09ac837cd5720bc60f1c16b472a7ab462b0ddb8) in our TF implementation of Performer fast attention.",
"Pull request finally submitted: #9325 "
] | 1,602 | 1,649 | null | CONTRIBUTOR | null | # 🌟 FAVOR+ / Performer attention addition
Are there any plans to add this new attention approximation block to the Transformers library?
## Model description
The new attention mechanism with linear time and space complexity was introduced in
_Rethinking Attention with Performers_ [[https://arxiv.org/abs/2009.14794](https://arxiv.org/abs/2009.14794)].
The authors of the paper claim that the new attention mechanism is backward-compatible with already existing models:
> Backwards compatibility with pretrained models is available as a benefit from softmax approximation, via small finetuning (required due to error propagation)
<!-- Important information -->
## Open source status
* [x] the model implementation is available: it's an original Trax implementation from Google: https://github.com/google-research/google-research/tree/master/performer/fast_self_attention
* [ ] the model weights are available: probably not required as it's a building block for models rather than a fully new architecture
* [x] who are the authors: Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, Afroz Mohiuddin, Lukasz Kaiser, David Belanger, Lucy Colwell, Adrian Weller
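For orientation, a minimal non-causal PyTorch sketch of the FAVOR+ idea (approximating softmax attention with positive random features) might look as follows. This is an illustration under simplifying assumptions, not the Google implementation: it uses a plain Gaussian projection instead of the paper's orthogonal random features, the stabilizer placement is simplified, and `Tensor.amax` requires PyTorch >= 1.7. The function names are made up for the example.

```python
import math
import torch

def favor_features(x, proj, is_query):
    # x: (batch, heads, seq, d_head); proj: (d_head, n_features), entries ~ N(0, 1)
    x = x * (x.shape[-1] ** -0.25)  # fold the 1/sqrt(d) softmax temperature into q and k
    u = x @ proj
    h = (x * x).sum(dim=-1, keepdim=True) / 2
    # Stabilizers are chosen so they cancel in the final normalization:
    # per-position for queries, one constant per head for keys.
    stab = u.amax(dim=-1, keepdim=True) if is_query else u.amax(dim=(-1, -2), keepdim=True)
    return torch.exp(u - h - stab) / math.sqrt(proj.shape[-1])

def favor_attention(q, k, v, proj):
    q_p = favor_features(q, proj, is_query=True)   # (b, h, s, r)
    k_p = favor_features(k, proj, is_query=False)  # (b, h, s, r)
    kv = torch.einsum("bhsr,bhsd->bhrd", k_p, v)   # sum over keys once: linear in seq length
    normalizer = 1.0 / torch.einsum("bhsr,bhr->bhs", q_p, k_p.sum(dim=2))
    return torch.einsum("bhsr,bhrd,bhs->bhsd", q_p, kv, normalizer)

# Example with BERT-base-like head sizes and 256 random features:
q, k, v = (torch.randn(2, 12, 128, 64) for _ in range(3))
out = favor_attention(q, k, v, torch.randn(64, 256))  # (2, 12, 128, 64)
```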
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7675/reactions",
"total_count": 111,
"+1": 58,
"-1": 0,
"laugh": 0,
"hooray": 19,
"confused": 0,
"heart": 18,
"rocket": 16,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7675/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7674 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7674/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7674/comments | https://api.github.com/repos/huggingface/transformers/issues/7674/events | https://github.com/huggingface/transformers/issues/7674 | 717,999,062 | MDU6SXNzdWU3MTc5OTkwNjI= | 7,674 | Correctly tokenize sentence pairs | {
"login": "datistiquo",
"id": 47474379,
"node_id": "MDQ6VXNlcjQ3NDc0Mzc5",
"avatar_url": "https://avatars.githubusercontent.com/u/47474379?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/datistiquo",
"html_url": "https://github.com/datistiquo",
"followers_url": "https://api.github.com/users/datistiquo/followers",
"following_url": "https://api.github.com/users/datistiquo/following{/other_user}",
"gists_url": "https://api.github.com/users/datistiquo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/datistiquo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/datistiquo/subscriptions",
"organizations_url": "https://api.github.com/users/datistiquo/orgs",
"repos_url": "https://api.github.com/users/datistiquo/repos",
"events_url": "https://api.github.com/users/datistiquo/events{/privacy}",
"received_events_url": "https://api.github.com/users/datistiquo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This would be still interesting.\r\n\r\nFor tokenizing a list of pairs I get \r\n\r\n```\r\ninput_ids = tokenizer(pairs, max_length=50, padding=\"max_length\",truncation=True, return_tensors=\"tf\")\r\n\r\n 'token_type_ids': <tf.Tensor: shape=(1, 50), dtype=int32, numpy=\r\narray([[0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0],\r\n....\r\n```\r\nSo I wonder I am doing right as it seems the padded values are connetced to the 1. sentences (0 id)\r\nSO token type IDS are 0 for the padded places, is that right?",
"Hey @datistiquo ,\r\n\r\nAs one can see in the following script:\r\n\r\n```python\r\n#!/usr/bin/env python3\r\nfrom transformers import BertTokenizer\r\n\r\ntokenizer = BertTokenizer.from_pretrained(\"bert-base-uncased\")\r\n\r\ninput_ids = tokenizer([[\"hey hello there\", \"what is going on\"], [\"peter is in the\", \"what is going on\"]], max_length=20, padding=\"max_length\", truncation=True, return_tensors=\"tf\")\r\n\r\nprint(\"List of pairs\", input_ids)\r\n\r\ninput_ids = tokenizer(\"hey hello there\", \"what is going on\", max_length=20, padding=\"max_length\", truncation=True, return_tensors=\"tf\")\r\n\r\nprint(\"Pair\", input_ids)\r\n```\r\n\r\ntokenizing a list of pairs should be done exactly as proposed by you. Regarding the token_type_ids it is also correct that padded places should have a value of 0. In general if a model does not make use of `token_type_ids`, we return a 0 for such a model, see: https://github.com/huggingface/transformers/blob/6b034309ca4ca2ec6e5c3cacda92a448fa10b921/src/transformers/models/roberta/tokenization_roberta.py#L233 . So for padded tokens that should be discarded in the model, 0 seems like the most sensible choice to me.",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,602 | 1,614 | 1,614 | NONE | null | Hey,
I saw different ways to tokenize sentence pairs and the intuitive one is not shown here:
https://huggingface.co/transformers/preprocessing.html#preprocessing-pairs-of-sentences
So, I am asking here whether I am doing it right.
I encode pairs of sentences using a list of lists. Instead of handing over two separate lists, one per sentence position, I hand over a list of lists, where each element is a single sentence pair. So:
pairs=[[sen1, sen2],[sen1,sen2],....]
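For concreteness, a small self-check suggesting the two calling conventions agree (a sketch; the checkpoint name is only an example and a reasonably recent transformers version is assumed):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

pairs = [["hey hello there", "what is going on"],
         ["peter is in the", "what is going on"]]

# Convention 1: one list of [sentence_a, sentence_b] pairs.
enc_pairs = tokenizer(pairs, padding=True, truncation=True)

# Convention 2: two parallel lists, one per sentence position.
enc_lists = tokenizer([p[0] for p in pairs], [p[1] for p in pairs],
                      padding=True, truncation=True)

assert enc_pairs["input_ids"] == enc_lists["input_ids"]
assert enc_pairs["token_type_ids"] == enc_lists["token_type_ids"]
```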
Is this right as well? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7674/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7674/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7673 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7673/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7673/comments | https://api.github.com/repos/huggingface/transformers/issues/7673/events | https://github.com/huggingface/transformers/issues/7673 | 717,904,596 | MDU6SXNzdWU3MTc5MDQ1OTY= | 7,673 | squad data preprocessor error (list index out of range) while finetuning bert on squad 1.1 | {
"login": "dineshggaonkar",
"id": 52095176,
"node_id": "MDQ6VXNlcjUyMDk1MTc2",
"avatar_url": "https://avatars.githubusercontent.com/u/52095176?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dineshggaonkar",
"html_url": "https://github.com/dineshggaonkar",
"followers_url": "https://api.github.com/users/dineshggaonkar/followers",
"following_url": "https://api.github.com/users/dineshggaonkar/following{/other_user}",
"gists_url": "https://api.github.com/users/dineshggaonkar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dineshggaonkar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dineshggaonkar/subscriptions",
"organizations_url": "https://api.github.com/users/dineshggaonkar/orgs",
"repos_url": "https://api.github.com/users/dineshggaonkar/repos",
"events_url": "https://api.github.com/users/dineshggaonkar/events{/privacy}",
"received_events_url": "https://api.github.com/users/dineshggaonkar/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@dineshggaonkar Did you fix it?\r\n"
] | 1,602 | 1,619 | 1,608 | NONE | null | run_squad.py throws this error on the SQuAD v1.1 dataset:
```
Traceback (most recent call last):
File "run_squad.py", line 820, in <module>
main()
File "run_squad.py", line 762, in main
train_dataset = load_and_cache_examples(args, tokenizer, evaluate=False, output_examples=False)
File "run_squad.py", line 446, in load_and_cache_examples
examples = processor.get_train_examples(args.data_dir, filename=args.train_file)
File "/home/din/question_answering_deepQA/venv_indic_deepQA/lib/python3.6/site-packages/transformers/data/processors/squad.py", line 602, in get_train_examples
return self._create_examples(input_data, "train")
File "/home/din/question_answering_deepQA/venv_indic_deepQA/lib/python3.6/site-packages/transformers/data/processors/squad.py", line 656, in _create_examples
answers=answers,
File "/home/din/question_answering_deepQA/venv_indic_deepQA/lib/python3.6/site-packages/transformers/data/processors/squad.py", line 729, in __init__
self.start_position = char_to_word_offset[start_position_character]
IndexError: list index out of range
```
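One quick way to confirm this is a data problem rather than a library bug is to scan the JSON for answers whose `answer_start` falls outside of, or disagrees with, the context, which is exactly what makes `char_to_word_offset[start_position_character]` go out of range. A minimal sketch (the file path is illustrative):

```python
import json

with open("train-v1.1.json") as f:  # hypothetical path to the SQuAD train file
    data = json.load(f)["data"]

for article in data:
    for paragraph in article["paragraphs"]:
        context = paragraph["context"]
        for qa in paragraph["qas"]:
            for answer in qa["answers"]:
                start = answer["answer_start"]
                # Flag offsets past the context or not matching the answer text.
                if start >= len(context) or not context.startswith(answer["text"], start):
                    print(qa["id"], "bad answer_start:", start, repr(answer["text"])[:60])
```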
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7673/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7673/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7672 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7672/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7672/comments | https://api.github.com/repos/huggingface/transformers/issues/7672/events | https://github.com/huggingface/transformers/pull/7672 | 717,854,722 | MDExOlB1bGxSZXF1ZXN0NTAwMzQ2NjIz | 7,672 | [pegasus] Faster tokenizer tests | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,602 | 1,602 | 1,602 | CONTRIBUTOR | null | This PR implements #7354
* The suggested `fixtures/test_sentencepiece.model` couldn't be used since it has wrong special token ids: we need
1. no bos
2. eos_id is 1
3. unk_id is 2
added a script that builds a custom tokenizer model: `test_sentencepiece_no_bos.model`. Had to figure out how to match the `"google/pegasus-large"` spm file; see the build script for nuances (a minimal sketch of the training call follows the list below).
* switched pegasus common tests to use the newly added `test_sentencepiece_no_bos.model`; the 2 custom tests that still use the large tokenizer remained untouched
And a few extra tweaks I made while sorting this PR out:
* removed `get_vocab` in `tokenization_pegasus.py` as it's identical to the superclass's implementation
* a few minor prose edits in related files
* expanded `testing_utils.py`'s `get_tests_dir` to accept an optional `append_path` arg to remove clutter from tests. Will probably rename it in the future to something else; it works for now.
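For reference, a minimal sketch of how such a sentencepiece model can be trained so the special-token ids line up (no bos, eos_id=1, unk_id=2). The input file name, vocab size, and pad_id=0 are illustrative assumptions; the build script in this PR is authoritative:

```python
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="some_corpus.txt",  # any small text file; the name is hypothetical
    model_prefix="test_sentencepiece_no_bos",
    vocab_size=1000,
    bos_id=-1,  # disable BOS entirely
    pad_id=0,   # assumed to match Pegasus' <pad>=0 convention
    eos_id=1,
    unk_id=2,
)
# Writes test_sentencepiece_no_bos.model / .vocab to the working directory.
```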
Fixes #7354
@sshleifer, @LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7672/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7672/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7672",
"html_url": "https://github.com/huggingface/transformers/pull/7672",
"diff_url": "https://github.com/huggingface/transformers/pull/7672.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7672.patch",
"merged_at": 1602256233000
} |
https://api.github.com/repos/huggingface/transformers/issues/7671 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7671/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7671/comments | https://api.github.com/repos/huggingface/transformers/issues/7671/events | https://github.com/huggingface/transformers/pull/7671 | 717,837,616 | MDExOlB1bGxSZXF1ZXN0NTAwMzMyMjg1 | 7,671 | fix nn.DataParallel compatibility with PyTorch 1.5 | {
"login": "guhur",
"id": 12297742,
"node_id": "MDQ6VXNlcjEyMjk3NzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/12297742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guhur",
"html_url": "https://github.com/guhur",
"followers_url": "https://api.github.com/users/guhur/followers",
"following_url": "https://api.github.com/users/guhur/following{/other_user}",
"gists_url": "https://api.github.com/users/guhur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guhur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guhur/subscriptions",
"organizations_url": "https://api.github.com/users/guhur/orgs",
"repos_url": "https://api.github.com/users/guhur/repos",
"events_url": "https://api.github.com/users/guhur/events{/privacy}",
"received_events_url": "https://api.github.com/users/guhur/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"(tagging @eltoto1219 for information)"
] | 1,602 | 1,602 | 1,602 | CONTRIBUTOR | null | The same type of error as in https://github.com/huggingface/transformers/pull/4300
# What does this PR do?
DataParallel replicate has a known issue in PyTorch 1.5: https://github.com/pytorch/pytorch/issues/40457
A similar PR proposes a workaround by removing the `next(self.parameters()).dtype` call: https://github.com/huggingface/transformers/pull/4300/files/7eef4f5a7575e05e822f8ef45d7f473a102671aa
I did the same in LXMERT; a minimal illustration of the failure mode and the workaround is sketched below.
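For context, the failure mode and the style of workaround look roughly like this; the module below is a hypothetical illustration, not the actual LXMERT code:

```python
import torch
from torch import nn

class ToyHead(nn.Module):
    def __init__(self, hidden_size: int = 8):
        super().__init__()
        self.dense = nn.Linear(hidden_size, hidden_size)

    def forward(self, x):
        # Breaks under nn.DataParallel on PyTorch 1.5, where replicas may not
        # re-register their parameters and next() raises StopIteration:
        #   dtype = next(self.parameters()).dtype
        # Workaround: derive dtype/device from a tensor we already hold.
        mask = torch.ones(x.shape[:-1], dtype=x.dtype, device=x.device)
        return self.dense(x) * mask.unsqueeze(-1)
```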
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten
@julien-c | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7671/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7671/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7671",
"html_url": "https://github.com/huggingface/transformers/pull/7671",
"diff_url": "https://github.com/huggingface/transformers/pull/7671.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7671.patch",
"merged_at": 1602234908000
} |
https://api.github.com/repos/huggingface/transformers/issues/7670 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7670/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7670/comments | https://api.github.com/repos/huggingface/transformers/issues/7670/events | https://github.com/huggingface/transformers/pull/7670 | 717,768,023 | MDExOlB1bGxSZXF1ZXN0NTAwMjc1MTQ1 | 7,670 | [s2s] Switch README urls to cdn | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,602 | 1,602 | 1,602 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
--> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7670/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7670/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7670",
"html_url": "https://github.com/huggingface/transformers/pull/7670",
"diff_url": "https://github.com/huggingface/transformers/pull/7670.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7670.patch",
"merged_at": 1602206542000
} |
https://api.github.com/repos/huggingface/transformers/issues/7669 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7669/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7669/comments | https://api.github.com/repos/huggingface/transformers/issues/7669/events | https://github.com/huggingface/transformers/pull/7669 | 717,624,717 | MDExOlB1bGxSZXF1ZXN0NTAwMTU1MzU0 | 7,669 | Update XLM-RoBERTa pretrained model details | {
"login": "noahtren",
"id": 32682811,
"node_id": "MDQ6VXNlcjMyNjgyODEx",
"avatar_url": "https://avatars.githubusercontent.com/u/32682811?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/noahtren",
"html_url": "https://github.com/noahtren",
"followers_url": "https://api.github.com/users/noahtren/followers",
"following_url": "https://api.github.com/users/noahtren/following{/other_user}",
"gists_url": "https://api.github.com/users/noahtren/gists{/gist_id}",
"starred_url": "https://api.github.com/users/noahtren/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/noahtren/subscriptions",
"organizations_url": "https://api.github.com/users/noahtren/orgs",
"repos_url": "https://api.github.com/users/noahtren/repos",
"events_url": "https://api.github.com/users/noahtren/events{/privacy}",
"received_events_url": "https://api.github.com/users/noahtren/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,602 | 1,602 | 1,602 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7669/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7669/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7669",
"html_url": "https://github.com/huggingface/transformers/pull/7669",
"diff_url": "https://github.com/huggingface/transformers/pull/7669.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7669.patch",
"merged_at": 1602235019000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/7668 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7668/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7668/comments | https://api.github.com/repos/huggingface/transformers/issues/7668/events | https://github.com/huggingface/transformers/issues/7668 | 717,601,196 | MDU6SXNzdWU3MTc2MDExOTY= | 7,668 | Default Model Licenses | {
"login": "ankane",
"id": 220358,
"node_id": "MDQ6VXNlcjIyMDM1OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/220358?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ankane",
"html_url": "https://github.com/ankane",
"followers_url": "https://api.github.com/users/ankane/followers",
"following_url": "https://api.github.com/users/ankane/following{/other_user}",
"gists_url": "https://api.github.com/users/ankane/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ankane/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ankane/subscriptions",
"organizations_url": "https://api.github.com/users/ankane/orgs",
"repos_url": "https://api.github.com/users/ankane/repos",
"events_url": "https://api.github.com/users/ankane/events{/privacy}",
"received_events_url": "https://api.github.com/users/ankane/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"`facebook/bart-large-mnli` is `mit` like other pretrained models initially released in [fairseq](https://github.com/pytorch/fairseq#license)\r\n\r\nFor `dbmdz` I'll let @stefan-it chime in, with the caveat that lineage/inheritance of licenses in fine-tuned ML models is (AFAIK) uncharted territory so if you have more info on that subject, please feel free to share it.\r\n\r\nFinally, for models where the license isn't indicated in the model card, please feel free to open a PR to add it.",
"Hi @julien-c @ankane,\r\n\r\nI am also very interested in clarifying the licenses of default models. In particular, I'd like to know the license of `dbmdz/bert-large-cased-finetuned-conll03-english`.\r\n\r\nCheers,\r\n\r\nAlex Combessie",
"Hey @julien-c, thanks for the quick response and `facebook/bart-large-mnli` info. PR submitted.\r\n\r\nRe fine-tuning licensing: Seems like it may fit the definition of \"Derivative Works\" in the Apache 2.0 license, but I don't have any special knowledge here, so will defer further discussion to someone that does.",
"Hi guys,\r\n\r\nsorry for the late reply! I have no strong opinion on that topic, so I would just say that license of our `dbmdz` models will be MIT, because we're usually use this kind of license for both software and our pre-trained LMs :) ",
"Great, thanks @stefan-it! That makes it clear that the model is open source :tada:\r\n\r\nIt'd be good to add a model card with the license. I personally think the most accurate summary of the model license is MIT + Apache-2.0 (unless it wasn't derived from Apache-2.0 work), but will leave it to you and the Transformers team to decide how you want to represent it.\r\n",
"On the technical side, just took a look at the code and our YAML parser would support an array of licenses, so feel free to open a PR with \r\n\r\n```\r\nlicense:\r\n- mit\r\n- apache-2.0\r\n```\r\n\r\nOn the legal side, 🤷♂️",
"Thanks @julien-c, good to know 👍 \r\n\r\nWill wait to hear thoughts from @stefan-it.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,602 | 1,608 | 1,608 | CONTRIBUTOR | null | Hi, thanks for the great library!
I've been trying to compile a list of licenses for the default models and wanted to share in case others were wondering about it. Here's what I have so far:
*Note: table has been updated based on this discussion.*
Task | Model | License | Model Card w/ License
--- | --- | --- | ---
feature-extraction | distilbert-base-cased | Apache-2.0 | ✓ (added)
sentiment-analysis | distilbert-base-uncased-finetuned-sst-2-english | Apache-2.0 | ✓ (added)
ner | dbmdz/bert-large-cased-finetuned-conll03-english | MIT* (added) |
question-answering | distilbert-base-cased-distilled-squad | Apache-2.0 | ✓ (added)
fill-mask | distilroberta-base | Apache-2.0 | ✓
text-generation | gpt2 | MIT* | ✓
summarization | sshleifer/distilbart-cnn-12-6 | Apache-2.0 | ✓
translation, text2text-generation | t5-base | Apache-2.0 | ✓
zero-shot-classification (PyTorch) | facebook/bart-large-mnli | MIT (added) | ✓ (added)
zero-shot-classification (TensorFlow) | roberta-large-mnli | MIT | ✓
conversational | microsoft/DialoGPT-medium | MIT | ✓
Notes:
- `distil` models without a model card are listed as Apache 2.0 based on this comment: https://github.com/huggingface/transformers/issues/3357#issuecomment-614856396
- `gpt2` was changed from MIT to a custom license earlier this year: [history](https://github.com/openai/gpt-2/commits/master/LICENSE)
- Other `dbmdz` models use MIT (https://github.com/huggingface/transformers/pull/3492), but I didn't find info on `dbmdz/bert-large-cased-finetuned-conll03-english`. If the model was fine-tuned from a pretrained BERT model, I imagine it would need to retain Apache 2.0 in addition to however the final model is licensed.
It'd be nice to get clarification on the two models that are missing licenses (and ideally ensure all default models have a clear license going forward). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7668/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7668/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7667 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7667/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7667/comments | https://api.github.com/repos/huggingface/transformers/issues/7667/events | https://github.com/huggingface/transformers/pull/7667 | 717,572,907 | MDExOlB1bGxSZXF1ZXN0NTAwMTExODQ3 | 7,667 | Add multi-class processor to apply categorical classification | {
"login": "AlaaHamoudah",
"id": 1776802,
"node_id": "MDQ6VXNlcjE3NzY4MDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1776802?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AlaaHamoudah",
"html_url": "https://github.com/AlaaHamoudah",
"followers_url": "https://api.github.com/users/AlaaHamoudah/followers",
"following_url": "https://api.github.com/users/AlaaHamoudah/following{/other_user}",
"gists_url": "https://api.github.com/users/AlaaHamoudah/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AlaaHamoudah/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AlaaHamoudah/subscriptions",
"organizations_url": "https://api.github.com/users/AlaaHamoudah/orgs",
"repos_url": "https://api.github.com/users/AlaaHamoudah/repos",
"events_url": "https://api.github.com/users/AlaaHamoudah/events{/privacy}",
"received_events_url": "https://api.github.com/users/AlaaHamoudah/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,602 | 1,602 | 1,602 | NONE | null | This PR adds a multi-class processor to glue.py to support categorical classification. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7667/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7667/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7667",
"html_url": "https://github.com/huggingface/transformers/pull/7667",
"diff_url": "https://github.com/huggingface/transformers/pull/7667.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7667.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/7666 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7666/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7666/comments | https://api.github.com/repos/huggingface/transformers/issues/7666/events | https://github.com/huggingface/transformers/issues/7666 | 717,567,864 | MDU6SXNzdWU3MTc1Njc4NjQ= | 7,666 | Clear up confusing translation pipeline task naming | {
"login": "klasocki",
"id": 37274142,
"node_id": "MDQ6VXNlcjM3Mjc0MTQy",
"avatar_url": "https://avatars.githubusercontent.com/u/37274142?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/klasocki",
"html_url": "https://github.com/klasocki",
"followers_url": "https://api.github.com/users/klasocki/followers",
"following_url": "https://api.github.com/users/klasocki/following{/other_user}",
"gists_url": "https://api.github.com/users/klasocki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/klasocki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/klasocki/subscriptions",
"organizations_url": "https://api.github.com/users/klasocki/orgs",
"repos_url": "https://api.github.com/users/klasocki/repos",
"events_url": "https://api.github.com/users/klasocki/events{/privacy}",
"received_events_url": "https://api.github.com/users/klasocki/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,602 | 1,608 | 1,608 | NONE | null | # 🚀 Feature request
Hello!
I am using the translation pipeline, and I noticed that even though I have to specify the language when I create the pipeline, the passed model overwrites that. So a pipeline created as
`nlp = pipeline('translation_en_to_de', 'Helsinki-NLP/opus-mt-en-jap')`
would translate English to Japanese, contrary to the task name. Is this the intended way of translating other languages, and will it change in the future?
Would it be possible to just add a single 'translation' task for pipelines, which would then resolve the languages based on the model (which it seems to do anyway now)? A sketch of the idea follows.
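For illustration, a minimal sketch of what that resolution could look like. Everything here (`translation_pipeline`, the naive name parsing, and the assumption that the language pair is encoded in the model id) is hypothetical, not part of the current library:
```python
from transformers import pipeline

def translation_pipeline(model_name: str):
    """Hypothetical unified 'translation' factory: infer the language pair from the model id."""
    # Assumes the Helsinki-NLP "opus-mt-{src}-{tgt}" naming scheme; other models
    # would need explicit metadata.
    _, _, src, tgt = model_name.split("/")[-1].split("-")
    print(f"Resolved direction: {src} -> {tgt}")
    # Today a registered task suffix is still required, but it does not drive
    # the languages; the model does:
    return pipeline("translation_en_to_de", model=model_name)

nlp = translation_pipeline("Helsinki-NLP/opus-mt-en-jap")
```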
## Motivation
It would clear up the current confusion and make the `pipeline` function signature less prone to change.
It could also possibly reduce code duplication in https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines.py
## My contribution
I'd love to help with a PR, though I'm confused: The `SUPPORTED_TASKS` dictionary in pipelines.py contains exactly the same entries for each translation pipeline, even the default model is the same, yet the specific pipelines actually translate to different languages 🤔 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7666/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7666/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7665 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7665/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7665/comments | https://api.github.com/repos/huggingface/transformers/issues/7665/events | https://github.com/huggingface/transformers/issues/7665 | 717,565,489 | MDU6SXNzdWU3MTc1NjU0ODk= | 7,665 | tokenizer_bert.py not call _clean_text? | {
"login": "liwei-cpp",
"id": 38450168,
"node_id": "MDQ6VXNlcjM4NDUwMTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/38450168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/liwei-cpp",
"html_url": "https://github.com/liwei-cpp",
"followers_url": "https://api.github.com/users/liwei-cpp/followers",
"following_url": "https://api.github.com/users/liwei-cpp/following{/other_user}",
"gists_url": "https://api.github.com/users/liwei-cpp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/liwei-cpp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liwei-cpp/subscriptions",
"organizations_url": "https://api.github.com/users/liwei-cpp/orgs",
"repos_url": "https://api.github.com/users/liwei-cpp/repos",
"events_url": "https://api.github.com/users/liwei-cpp/events{/privacy}",
"received_events_url": "https://api.github.com/users/liwei-cpp/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,602 | 1,602 | 1,602 | NONE | null | for transformers/src/transformers/tokenization_bert.py, there is a function called _clean_text.
But seems this function is not be called at all?
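For reference, a close paraphrase of what `_clean_text` does, rewritten with `unicodedata` so it is self-contained (the library version uses private `_is_control`/`_is_whitespace` helpers instead):
```python
import unicodedata

def _clean_text(text):
    """Drop NUL/U+FFFD/control characters and normalize all whitespace to ' '."""
    output = []
    for char in text:
        if char in ("\t", "\n", "\r") or unicodedata.category(char) == "Zs":
            output.append(" ")  # any whitespace becomes a single space
            continue
        if ord(char) in (0, 0xFFFD) or unicodedata.category(char).startswith("C"):
            continue  # invalid, control, or format character: removed
        output.append(char)
    return "".join(output)

print(_clean_text("Hello\x00\u200bworld\tagain"))  # -> "Helloworld again"
```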
In Google BERT (https://github.com/google-research/bert/blob/master/tokenization.py) the same function exists and is called at the beginning of tokenization. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7665/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7665/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7664 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7664/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7664/comments | https://api.github.com/repos/huggingface/transformers/issues/7664/events | https://github.com/huggingface/transformers/issues/7664 | 717,556,298 | MDU6SXNzdWU3MTc1NTYyOTg= | 7,664 | TF Slow test CI | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834088753,
"node_id": "MDU6TGFiZWwxODM0MDg4NzUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Tests",
"name": "Tests",
"color": "a6fcca",
"default": false,
"description": "Related to tests"
}
] | closed | false | null | [] | [
"You're correct, it is not currently running as there were some issues setting up both the PT/TF test suites. Will look into it this afternoon.",
"The slow tests in TF take an absurdly long time. I had to stop them from running after ~3.5 hours as it was holding the whole test suite back. Will investigate more on a separate machine and try to skin it down.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,602 | 1,608 | 1,608 | CONTRIBUTOR | null | I Don't think tf slow tests are run by circleci OR github actions.
Should they be @LysandreJik ? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7664/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7664/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7663 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7663/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7663/comments | https://api.github.com/repos/huggingface/transformers/issues/7663/events | https://github.com/huggingface/transformers/issues/7663 | 717,555,492 | MDU6SXNzdWU3MTc1NTU0OTI= | 7,663 | 2 slow TF T5 common tests failing on master | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834054694,
"node_id": "MDU6TGFiZWwxODM0MDU0Njk0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/TensorFlow",
"name": "TensorFlow",
"color": "FF6F00",
"default": false,
"description": "Anything TensorFlow"
}
] | closed | false | null | [] | [
"Yeah, that's a know failure and I didn't manage to make it work yet with the `cast_bool_to_primite(...)` function",
"This should be fixed in the next big TF rework.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,602 | 1,608 | 1,608 | CONTRIBUTOR | null | These should probably be run on CI somewhere.
Didn't know whether to assign @patrickvonplaten or @jplu.
These fail in both tf 2.2 and tf 2.3.
#### Command
```bash
RUN_SLOW=1 pytest tests/test_modeling_tf_t5.py -k saved
```
#### Traceback
```
================================================================================= FAILURES =================================================================================
__________________________________________________________ TFT5ModelTest.test_saved_model_with_attentions_output ___________________________________________________________
tests/test_modeling_tf_common.py:223: in test_saved_model_with_attentions_output
self.assertEqual(len(outputs), num_out)
E AssertionError: 5 != 4
_________________________________________________________ TFT5ModelTest.test_saved_model_with_hidden_states_output _________________________________________________________
tests/test_modeling_tf_common.py:185: in test_saved_model_with_hidden_states_output
self.assertEqual(len(outputs), num_out)
E AssertionError: 5 != 4
---------------------------------
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7663/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7663/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7662 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7662/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7662/comments | https://api.github.com/repos/huggingface/transformers/issues/7662/events | https://github.com/huggingface/transformers/issues/7662 | 717,533,161 | MDU6SXNzdWU3MTc1MzMxNjE= | 7,662 | loss.backward() being called twice in Trainer._training_step() | {
"login": "aclifton314",
"id": 53267795,
"node_id": "MDQ6VXNlcjUzMjY3Nzk1",
"avatar_url": "https://avatars.githubusercontent.com/u/53267795?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aclifton314",
"html_url": "https://github.com/aclifton314",
"followers_url": "https://api.github.com/users/aclifton314/followers",
"following_url": "https://api.github.com/users/aclifton314/following{/other_user}",
"gists_url": "https://api.github.com/users/aclifton314/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aclifton314/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aclifton314/subscriptions",
"organizations_url": "https://api.github.com/users/aclifton314/orgs",
"repos_url": "https://api.github.com/users/aclifton314/repos",
"events_url": "https://api.github.com/users/aclifton314/events{/privacy}",
"received_events_url": "https://api.github.com/users/aclifton314/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I'm not sure how we can expect this to work with the encoding and the usage of the `generate` method directly in the forward method.\r\n\r\n@patrickvonplaten can chime in if I'm wrong, but I believe the `generate` method can not be back-propagated through as it is right now.",
"@LysandreJik I came across one issue (#6105) with `generate` being used in `forward`. My temporary workaround was to introduce `is_finetuning_current_model` into `generate` that will call `generate_text_while_finetuning` instead of `forward` again to avoid the recursion.\r\n\r\nI'm still learning pytorch so I might be wrong on this, and correct me if I am. I checked the `grad_fn` for `input_ids`, `full_generated_gpt2_ids`, and each of them were set to `None`. `tmp_losses`, `losses`, and `loss` all had their `grad_fn` set. My naive assumption is that backpropagation will run up to `tmp_losses`, skip over the `generate` part, and then continue on through the gpt2 model.\r\n\r\nAnother interesting point is that I get the error on different training examples. I set the batch size to 1 and it would produce the error on, say, example 5. Removing example 5 from the training set and rerunning would cause the error on example 3, etc.",
"yes, the `generate()` cannot be used for backpropagation at the moment . ",
"@patrickvonplaten Would that explain why I'm encountering the above error? Do you also mind elaborating on why `generate()` cannnot be usef for backpropagation? I'm interested to hear the details for the sake of my own knowledge.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,602 | 1,608 | 1,608 | NONE | null | **Setup**
pytorch: 1.5.1
huggingface transformers: 3.0.2
python: 3.7.6
OS: Pop!_OS 20.04 on VM
**Sample Code**
```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel, TrainingArguments, Trainer
import torch
from torch.utils.data import Dataset
import sys
import pandas as pd
ZERO = sys.float_info.min
ZERO_PT = torch.tensor(ZERO)
class GPT2FinetunedWithNgrams(GPT2LMHeadModel):
def __init__(self, config):
super().__init__(config)
self.tokenizer = GPT2Tokenizer.from_pretrained('gpt2', padding_side='right')
self.tokenizer.pad_token = self.tokenizer.eos_token
def eval_sentence(self, sent: str):
        vec = torch.tensor(sentence_vec(sent), dtype=torch.float, requires_grad=True) # remove punct, lower case, split on space, prepend "<s>", append "</s>" start and stop tokens. Returns tensor of ints of vocab.
last_idx = min(max_ngram, len(vec)) #max_ngram is an int
probs = [max(ZERO_PT, pkatz(vec[0:i])) for i in range(2, last_idx + 1)] #pkatz is katz backoff probability and returns a tensor with grad function set.
for i in range(1, len(vec) - last_idx + 1):
j = i + last_idx
probs.append(max(ZERO_PT, pkatz(vec[i:j])))
probs = torch.stack(probs)
log_probs = torch.log(probs)
log_prob = torch.sum(log_probs)
len_tensor = torch.tensor(len(vec), dtype=float, requires_grad=True)
final_prob = torch.true_divide(-log_prob, len_tensor)
return final_prob
def sentence_loss(self, sent: str):
        p = self.eval_sentence(sent)  # eval_sentence returns a single loss tensor
return -p
def generate_text_while_finetuning(self,
input_ids=None,
past=None,
attention_mask=None,
token_type_ids=None,
position_ids=None,
head_mask=None,
inputs_embeds=None,
labels=None,
use_cache=None,
output_attentions=None,
output_hidden_states=None, ):
transformer_outputs = self.transformer(
input_ids,
past=past,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
position_ids=position_ids,
head_mask=head_mask,
inputs_embeds=inputs_embeds,
use_cache=use_cache,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
)
hidden_states = transformer_outputs[0]
lm_logits = self.lm_head(hidden_states)
outputs = (lm_logits,) + transformer_outputs[1:]
return outputs # (loss), lm_logits, presents, (all hidden_states), (attentions)
def forward(
self,
input_ids=None,
past=None,
attention_mask=None,
token_type_ids=None,
position_ids=None,
head_mask=None,
inputs_embeds=None,
labels=None,
use_cache=True,
):
max_length = input_ids.shape[1] + 50
full_generated_gpt2_ids = self.generate(input_ids=input_ids,
max_length=max_length,
is_finetuning_current_model=True,
attention_mask=attention_mask,
pad_token_id=50256,
do_sample=True,
top_k=50,
top_p=0.95)
decoded_gen_samples = self.tokenizer.batch_decode(full_generated_gpt2_ids, skip_special_tokens=True)
tmp_losses = [self.sentence_loss(decoded_sample) for decoded_sample in decoded_gen_samples]
losses = torch.stack(tmp_losses)
loss = losses.mean()
return (loss,)
##The code below is the run script.
class MyDataset(Dataset):
def __init__(self, csv_file: str):
self.df = pd.read_csv(csv_file, encoding='ISO-8859-1')
def __len__(self):
return len(self.df)
def __getitem__(self, idx):
if torch.is_tensor(idx):
idx = idx.tolist()
text = self.df.iloc[idx, 1]
return text
def my_data_collator(dataset_samples_list):
tokenizer = GPT2Tokenizer.from_pretrained('gpt2', padding_side='right')
tokenizer.pad_token = tokenizer.eos_token
encoded_results = tokenizer(dataset_samples_list, padding=True, truncation=True, return_tensors='pt', return_attention_mask=True)
batch = {}
batch['input_ids'] = torch.stack([result for result in encoded_results['input_ids']])
batch['past'] = None
batch['attention_mask'] = torch.stack([result for result in encoded_results['attention_mask']])
batch['position_ids'] = None
batch['head_mask'] = None
batch['inputs_embeds'] = None
batch['labels'] = None
batch['use_cache'] = True
return batch
dataset_train = MyDataset('/path/to/train_dataset.csv')
training_args = TrainingArguments(
output_dir='/path/to/out',
do_train=True,
per_device_train_batch_size=64,
logging_dir='/path/to/dir',
max_steps=300000
)
model = GPT2FinetunedWithNgrams.from_pretrained('gpt2')
trainer = Trainer(
model=model,
args=training_args,
data_collator=my_data_collator,
train_dataset=dataset_train
)
trainer.train()
trainer.save_model('/path/to/model_save_dir')
```
**Issue**
The above code will produce the following error for some training examples:
```python
Traceback (most recent call last):
File "/home/aclifton/ric-2020/textgen/run_finetune_gpt2.py", line 221, in <module>
testfinetune()
File "/home/aclifton/ric-2020/textgen/run_finetune_gpt2.py", line 215, in testfinetune
trainer.train()
File "/home/aclifton/anaconda3/lib/python3.7/site-packages/transformers/trainer.py", line 499, in train
tr_loss += self._training_step(model, inputs, optimizer)
File "/home/aclifton/anaconda3/lib/python3.7/site-packages/transformers/trainer.py", line 637, in _training_step
loss.backward()
File "/home/aclifton/anaconda3/lib/python3.7/site-packages/torch/tensor.py", line 198, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/home/aclifton/anaconda3/lib/python3.7/site-packages/torch/autograd/__init__.py", line 100, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time.
```
What I'm finding is that in `Trainer._training_step()`, some examples cause `loss.backward()` to be called twice. The error makes sense: the first call computes the gradients and frees the graph buffers, so the second call is what throws. I'm not sure what would cause this to happen and was wondering if others might have an idea?
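As a sanity check, the same error can be reproduced in isolation, independent of the `Trainer`, by calling `backward()` twice on one graph:
```python
import torch

x = torch.ones(1, requires_grad=True)
y = (x * 2).sum()
y.backward()  # first call succeeds and frees the intermediate buffers
y.backward()  # RuntimeError: Trying to backward through the graph a second time ...
```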
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7662/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7662/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7661 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7661/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7661/comments | https://api.github.com/repos/huggingface/transformers/issues/7661/events | https://github.com/huggingface/transformers/pull/7661 | 717,519,415 | MDExOlB1bGxSZXF1ZXN0NTAwMDY3NTY1 | 7,661 | [pseudo] Switch URLS to CDN | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,602 | 1,602 | 1,602 | CONTRIBUTOR | null | Switch s3 urls -> CDN urls.
cc @julien-c | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7661/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7661/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7661",
"html_url": "https://github.com/huggingface/transformers/pull/7661",
"diff_url": "https://github.com/huggingface/transformers/pull/7661.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7661.patch",
"merged_at": 1602180759000
} |
https://api.github.com/repos/huggingface/transformers/issues/7660 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7660/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7660/comments | https://api.github.com/repos/huggingface/transformers/issues/7660/events | https://github.com/huggingface/transformers/pull/7660 | 717,516,162 | MDExOlB1bGxSZXF1ZXN0NTAwMDY0ODU1 | 7,660 | [broken] tf generate: use model_kwargs | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,602 | 1,614 | 1,614 | CONTRIBUTOR | null | @patrickvonplaten , I started trying to get tf generation/cache to be consistent with pytorch, but got stuck trying to get T5 working. I figured I would share in case you see an easy fix/want to take over. Otherwise, feel free to ignore :) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7660/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7660/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7660",
"html_url": "https://github.com/huggingface/transformers/pull/7660",
"diff_url": "https://github.com/huggingface/transformers/pull/7660.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7660.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/7659 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7659/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7659/comments | https://api.github.com/repos/huggingface/transformers/issues/7659/events | https://github.com/huggingface/transformers/pull/7659 | 717,423,444 | MDExOlB1bGxSZXF1ZXN0NDk5OTg4NDM5 | 7,659 | [Dependencies|tokenizers] Make both SentencePiece and Tokenizers optional dependencies | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Ok ready for review @LysandreJik @sgugger.\r\n\r\nIt's pretty big sorry.\r\n\r\nFor now sentencepiece is still in the requirements as removing it has some effect on the pipeline tests and I think it's probably good to study this in a separate future PR.\r\n\r\nThere is no breaking change apart from the fact that importing the `**Fast` tokenizer directly from the `transformers.tokenization_xxx` is not possible anymore, they should be imported from `transformers` (the best and most robust choice) or from their new respective location at `transformers.tokenization_xxx_fast`.",
"Ok the `examples/seq2seq/test_seq2seq_examples.py::test_finetune[stas/tiny-wmt19-en-de]` is working now.\r\nI'll address the other comments and we can merge on Monday.",
"@thomwolf, could you please assign defaults that are different from \"stas/tiny-wmt19-en-de\" entry and its contents? Otherwise it defeats the purpose of testing with this model, since defaults are used instead.\r\n\r\nAlternatively, I will need to create a new tiny model with different config and change tests to use that instead.\r\n\r\nOnce this is done let's add this test that I tried to add here: https://github.com/huggingface/transformers/pull/7860 - I expanded it below a bit to do better testing:\r\n\r\n```\r\ndiff --git a/tests/test_tokenization_fsmt.py b/tests/test_tokenization_fsmt.py\r\nindex c3e08d56..833b1742 100644\r\n--- a/tests/test_tokenization_fsmt.py\r\n+++ b/tests/test_tokenization_fsmt.py\r\n@@ -24,6 +24,7 @@ from transformers.tokenization_fsmt import VOCAB_FILES_NAMES, FSMTTokenizer\r\n\r\n from .test_tokenization_common import TokenizerTesterMixin\r\n\r\n+FSMT_TINY = \"stas/tiny-wmt19-en-de\"\r\n\r\n class FSMTTokenizationTest(TokenizerTesterMixin, unittest.TestCase):\r\n tokenizer_class = FSMTTokenizer\r\n@@ -86,6 +87,13 @@ class FSMTTokenizationTest(TokenizerTesterMixin, unittest.TestCase):\r\n def tokenizer_en_ru(self):\r\n return FSMTTokenizer.from_pretrained(\"facebook/wmt19-en-ru\")\r\n\r\n+ def test_online_tokenizer_config(self):\r\n+ \"\"\"this just tests that the online tokenizer files get correctly fetched and\r\n+ loaded via its tokenizer_config.json and it's not slow so it's run by normal CI\r\n+ \"\"\"\r\n+ tokenizer = FSMTTokenizer.from_pretrained(FSMT_TINY)\r\n+ self.assertListEqual([tokenizer.src_lang, tokenizer.tgt_lang], [\"en\", \"de\"])\r\n+\r\n def test_full_tokenizer(self):\r\n \"\"\" Adapted from Sennrich et al. 2015 and https://github.com/rsennrich/subword-nmt \"\"\"\r\n tokenizer = FSMTTokenizer(self.langs, self.src_vocab_file, self.tgt_vocab_file, self.merges_file)\r\n\r\n```\r\n\r\nThanks.\r\n",
"Yes feel free to create another model for fsmt @stas00.\r\n\r\nOk this big PR is ready for merge as soon as possible (with regards to other PR merges not absolute time) so it doesn't drift too much.",
"There are some `~transformers.tokenization_utils_base.PreTrainedTokenizer` left (and same with fast) but that's an easy pattern to search for a subsequent PR.",
"Ok then I'm merging this PR and continuing in another one to:\r\n- add CI tests for the package without sentencepiece and tokenizer\r\n- remove sentencepiece as a required dependency\r\n- switch to fast tokenizers by default\r\n- fix the remaining doc patterns that you mentioned\r\n\r\nOn the topic of `from_pretrained` logic, we could (should probably be another PR):\r\n- add a test that the config of the tokenizers is used as mentioned by @stas00\r\n- we could probably remove the hard-coded configs at the same time\r\n- switch to the cloud-front links like the models for faster dowloads",
"**edited**: thanks to @sshleifer - I needed to `pip install -e \".[dev]\"` to update the new dependencies. that fixed the issues.\r\n\r\n----------\r\n\r\nI'm getting a massive amount of identical failures after this merge, primarily:\r\n\r\n```\r\n_____________________________________________ XLNetTokenizationTest.test_num_special_tokens_to_add_equal _____________________________________________\r\n[gw1] linux -- Python 3.8.5 /home/stas/anaconda3/envs/main-38/bin/python\r\n\r\nself = <tests.test_tokenization_xlnet.XLNetTokenizationTest testMethod=test_num_special_tokens_to_add_equal>\r\n\r\n def test_num_special_tokens_to_add_equal(self):\r\n for tokenizer, pretrained_name, kwargs in self.tokenizers_list:\r\n with self.subTest(\"{} ({})\".format(tokenizer.__class__.__name__, pretrained_name)):\r\n> tokenizer_r = self.rust_tokenizer_class.from_pretrained(pretrained_name, **kwargs)\r\n\r\ntests/test_tokenization_common.py:1896: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\nsrc/transformers/tokenization_utils_base.py:1588: in from_pretrained\r\n return cls._from_pretrained(\r\nsrc/transformers/tokenization_utils_base.py:1661: in _from_pretrained\r\n tokenizer = cls(*init_inputs, **init_kwargs)\r\nsrc/transformers/tokenization_xlnet_fast.py:142: in __init__\r\n super().__init__(\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\nself = <[AttributeError(\"'XLNetTokenizerFast' object has no attribute 'name_or_path'\") raised in repr()] XLNetTokenizerFast object at 0x7f2ccdaaee80>\r\nargs = (), kwargs = {'additional_special_tokens': ['<eop>', '<eod>'], 'bos_token': '<s>', 'cls_token': '<cls>', 'do_lower_case': False, ...}\r\nslow_tokenizer = None\r\nfast_tokenizer_file = '/home/stas/.cache/torch/transformers/d152c146766f0a31888c4c9c0dcf82e42e42d09bf818bb74e126f2420cbd36c4.ecf1d38c0b94010f431264b9ded85217342f84c7bdae79b0472f7cd20b94052d'\r\n\r\n def __init__(self, *args, **kwargs):\r\n slow_tokenizer = kwargs.pop(\"__slow_tokenizer\", None)\r\n fast_tokenizer_file = kwargs.pop(\"tokenizer_file\", None)\r\n \r\n if fast_tokenizer_file is not None:\r\n # We have a serialization from tokenizers which let us directly build the backend\r\n> fast_tokenizer = TokenizerFast.from_file(fast_tokenizer_file)\r\nE Exception: data did not match any variant of untagged enum PyNormalizerTypeWrapper at line 1 column 318041\r\n```\r\ndo I need to remove cache or something? I won't test this until you tell me to in case you need someone with the old cache to test that it can recover from this.\r\n\r\nA total of 90 failed tests with this error.",
"You should update `tokenizers` to the main PyPi version @stas00 \r\n```\r\npip install tokenizers --update\r\n```"
] | 1,602 | 1,603 | 1,603 | MEMBER | null | # What does this PR do?
Both the [SentencePiece](https://github.com/google/sentencepiece) and [Tokenizers](https://github.com/huggingface/tokenizers) libraries can limit users:
- `sentencepiece` is not available on Conda for every platform, which is one of the reasons `transformers` is not on Conda
- `tokenizers` cannot be used inside some labs that need to build everything from source and don't have Rust tooling.
This PR aims to make both optional, leveraging the addition of the SentencePiece algorithms in Tokenizers.
Note: at least one of `sentencepiece` and `tokenizers` will be required to use the SentencePiece tokenizers. `tokenizers` is also required to use the Fast tokenizers.
Main changes in the library organization:
- fast tokenizers are now separated into `tokenization_XXX_fast.py` files (see the import sketch after this list)
- a `convert_slow_tokenizer.py` file hosts the conversion methods from a slow to a fast tokenizer, but a direct path from a `tokenizers` serialization file is favored when such a file is available.
- the test suites for slow and fast tokenizers are now gathered into a single test suite.
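A sketch of the resulting import layout (BERT is used purely as an example):
```python
from transformers import BertTokenizerFast                      # recommended, most robust
# equivalently, from its new dedicated module:
from transformers.tokenization_bert_fast import BertTokenizerFast
# from transformers.tokenization_bert import BertTokenizerFast  # no longer possible after this PR
```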
Main new requirements for the tokenizers to pass the new test suite:
- at least one default vocabulary checkpoint (and max length) should be provided; it is used for the deep tests
- the fast tokenizer should have an explicit `tokenizer_file` keyword argument defaulting to `None` (we check this to be sure all the fast tokenizers can accept the new serialization format); see the sketch after this list
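A sketch of what that requirement looks like in practice (the class and argument names besides `tokenizer_file` are illustrative):
```python
from transformers import PreTrainedTokenizerFast

class XxxTokenizerFast(PreTrainedTokenizerFast):
    # The common tests check that `tokenizer_file` exists explicitly and
    # defaults to None, so every fast tokenizer can accept the new
    # serialization format produced by the `tokenizers` library.
    def __init__(self, vocab_file=None, tokenizer_file=None, **kwargs):
        super().__init__(vocab_file=vocab_file, tokenizer_file=tokenizer_file, **kwargs)
```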
To-add:
- when the documentation for `tokenizers` is ready: add links on how to build and add a fast tokenizer
- add a detailed explanation on how to add a fast tokenizer to the library
This PR also:
- adds a `__repr__` for the tokenizers (finally...)
- adds a `name_or_path` attribute to the models and tokenizers giving the shortcut name or the path of the pretrained checkpoint used for instantiation (example after this list)
- updates the fast tokenizers to use (when possible) the new serialization format of the `tokenizers` library, falling back on the old diverse set of saving formats if not available.
- cleans up the tests for the fast tokenizers to bring them into the common tokenizer tests
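For example (the exact `__repr__` output is indicative only):
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
print(tokenizer.name_or_path)  # "bert-base-uncased"
print(tokenizer)               # e.g. BertTokenizer(name_or_path='bert-base-uncased', vocab_size=30522, ...)
```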
Fixes #7402 #5100 (and maybe others)
## Before submitting
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7659/reactions",
"total_count": 3,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7659/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7659",
"html_url": "https://github.com/huggingface/transformers/pull/7659",
"diff_url": "https://github.com/huggingface/transformers/pull/7659.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7659.patch",
"merged_at": 1603047085000
} |
https://api.github.com/repos/huggingface/transformers/issues/7658 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7658/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7658/comments | https://api.github.com/repos/huggingface/transformers/issues/7658/events | https://github.com/huggingface/transformers/pull/7658 | 717,239,702 | MDExOlB1bGxSZXF1ZXN0NDk5ODM1ODU3 | 7,658 | Green tests: update torch-hub test dependencies (add protobuf and pin tokenizer 0.9.0-RC2) | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks a lot!"
] | 1,602 | 1,602 | 1,602 | MEMBER | null | # What does this PR do?
Update the torch-hub CI test dependencies: add protobuf and pin `tokenizers` to 0.9.0-rc2 until the final release.
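The equivalent manual install would be roughly the following (the exact pre-release version string published on PyPI is an assumption):
```bash
pip install protobuf "tokenizers==0.9.0rc2"
```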
## Who can review?
@sgugger @n1t0 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7658/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7658/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7658",
"html_url": "https://github.com/huggingface/transformers/pull/7658",
"diff_url": "https://github.com/huggingface/transformers/pull/7658.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7658.patch",
"merged_at": 1602156075000
} |
https://api.github.com/repos/huggingface/transformers/issues/7657 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7657/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7657/comments | https://api.github.com/repos/huggingface/transformers/issues/7657/events | https://github.com/huggingface/transformers/issues/7657 | 717,218,864 | MDU6SXNzdWU3MTcyMTg4NjQ= | 7,657 | SqueezBert link gives a 404 error | {
"login": "mockingbirdz",
"id": 2188799,
"node_id": "MDQ6VXNlcjIxODg3OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2188799?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mockingbirdz",
"html_url": "https://github.com/mockingbirdz",
"followers_url": "https://api.github.com/users/mockingbirdz/followers",
"following_url": "https://api.github.com/users/mockingbirdz/following{/other_user}",
"gists_url": "https://api.github.com/users/mockingbirdz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mockingbirdz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mockingbirdz/subscriptions",
"organizations_url": "https://api.github.com/users/mockingbirdz/orgs",
"repos_url": "https://api.github.com/users/mockingbirdz/repos",
"events_url": "https://api.github.com/users/mockingbirdz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mockingbirdz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes, unfortunately that link will only be live at the next release (for now squeezeBERT is only in master, so only in the master documentation).\r\n@LysandreJik not sure if there is a way to properly fix this unless we add \"Check all the links in the README to remove the master\" in our release check list.",
"You're right, I don't think there's any other way without over-engineering a feature."
] | 1,602 | 1,602 | 1,602 | NONE | null | The main Readme.md file (https://github.com/huggingface/transformers/blob/master/README.md), the SqueezeBert link (https://huggingface.co/transformers/model_doc/squeezebert.html) gives a "404 - Not Found" Error. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7657/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7657/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7656 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7656/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7656/comments | https://api.github.com/repos/huggingface/transformers/issues/7656/events | https://github.com/huggingface/transformers/issues/7656 | 717,169,887 | MDU6SXNzdWU3MTcxNjk4ODc= | 7,656 | T5 Beam search num_beans always equals 1 | {
"login": "marcoabrate",
"id": 43387597,
"node_id": "MDQ6VXNlcjQzMzg3NTk3",
"avatar_url": "https://avatars.githubusercontent.com/u/43387597?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marcoabrate",
"html_url": "https://github.com/marcoabrate",
"followers_url": "https://api.github.com/users/marcoabrate/followers",
"following_url": "https://api.github.com/users/marcoabrate/following{/other_user}",
"gists_url": "https://api.github.com/users/marcoabrate/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marcoabrate/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marcoabrate/subscriptions",
"organizations_url": "https://api.github.com/users/marcoabrate/orgs",
"repos_url": "https://api.github.com/users/marcoabrate/repos",
"events_url": "https://api.github.com/users/marcoabrate/events{/privacy}",
"received_events_url": "https://api.github.com/users/marcoabrate/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @marcoabrate, \r\n\r\nplease make sure that you use the correct parameter name `num_beams` instead of `num_beans`. \r\nWhen using `num_beams`, I cannot reproduce your error.",
"of course it was that!\r\n\r\nthank you"
] | 1,602 | 1,602 | 1,602 | NONE | null | ## Environment info
- `transformers` version: 3.3.1
- Platform: Debian 10.6
- Python version: 3.8.5
- PyTorch version (GPU?): 1.6
- Tensorflow version (GPU?): N.A.
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
TextGeneration: @TevenLeScao
T5: @patrickvonplaten
## To reproduce
Steps to reproduce the behavior:
1. load T5 model and tokenizer
```
from transformers import T5ForConditionalGeneration, T5Tokenizer
# initialize the model architecture and weights
model = T5ForConditionalGeneration.from_pretrained("t5-base")
tokenizer = T5Tokenizer.from_pretrained("t5-base")
```
2. prepare input for summarization. I guess the error persists for any generation task, but I did not try.
```
article = """article etc etc"""
inputs = tokenizer.encode("summarize: " + article,
return_tensors = "pt",
max_length = 512, truncation = True)
```
3. attempt beam search
```
model.config.update({"num_beans": 4})
print(model.config.num_beans)
# output is 4 as expected
outputs = model.generate(inputs,
max_length = 200,
min_length = 100,
length_penalty = 5,
num_return_sequences = 2,
early_stopping = True)
```
or
```
outputs = model.generate(inputs,
max_length = 200,
min_length = 100,
length_penalty = 5,
num_beams = 4,
num_return_sequences = 2,
early_stopping = True)
```
error:
> AssertionError: Greedy decoding will always produce the same output for num_beams == 1 and num_return_sequences > 1. Please set num_return_sequences = 1
as if num_beans == 1, but we set num_beans to 4.
## Expected behavior
the generate function should execute beam search with 4 beams without errors
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7656/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7656/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7655 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7655/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7655/comments | https://api.github.com/repos/huggingface/transformers/issues/7655/events | https://github.com/huggingface/transformers/issues/7655 | 717,086,464 | MDU6SXNzdWU3MTcwODY0NjQ= | 7,655 | Eval_loss in prediction is very high : transformers/examples/token-classification/run_ner.py | {
"login": "priyaradhakrishnan0",
"id": 5978979,
"node_id": "MDQ6VXNlcjU5Nzg5Nzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5978979?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/priyaradhakrishnan0",
"html_url": "https://github.com/priyaradhakrishnan0",
"followers_url": "https://api.github.com/users/priyaradhakrishnan0/followers",
"following_url": "https://api.github.com/users/priyaradhakrishnan0/following{/other_user}",
"gists_url": "https://api.github.com/users/priyaradhakrishnan0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/priyaradhakrishnan0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/priyaradhakrishnan0/subscriptions",
"organizations_url": "https://api.github.com/users/priyaradhakrishnan0/orgs",
"repos_url": "https://api.github.com/users/priyaradhakrishnan0/repos",
"events_url": "https://api.github.com/users/priyaradhakrishnan0/events{/privacy}",
"received_events_url": "https://api.github.com/users/priyaradhakrishnan0/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Any updates?!? On training, it is very low and on the evaluation its insanely high, I tried to check if the hyper parameters were the problem, but I didn't find anything.",
"No. The eval_loss is low. No updates.",
"I was reading more about the metrics and it looks like that for a specific task they place a specific meaning, e.g I'm doing multi-label classification so these are the [metrics](https://simpletransformers.ai/docs/classification-models/#evaluating-a-classification-model):\r\n\r\n**LRAP**\r\n\r\nLabel ranking average precision.\r\n\r\nLabel ranking average precision (LRAP) is the average over each ground truth label assigned to each sample, of the ratio of true vs. total labels with lower score.\r\n\r\nThe obtained score is always strictly greater than 0 and the best value is 1.\r\n\r\n**Evaluation Loss**\r\n\r\nBinary Cross Entropy Loss.\r\n\r\nIt is a Sigmoid activation plus a Cross-Entropy loss. Unlike Softmax loss it is independent for each vector component (class), meaning that the loss computed for every NN output vector component is not affected by other component values.\r\n\r\nCross-entropy loss awards lower loss to predictions which are closer to the class label. The accuracy, on the other hand, is a binary true/false for a particular sample. That is, Loss here is a continuous variable i.e. it's best when predictions are close to 1 (for true labels) and close to 0 (for false ones).\r\n\r\nTheoretically, the output of the model is not wrong, but the interpretation is.\r\n\r\n- References\r\n\r\n[Understanding Categorical Cross-Entropy Loss, Binary Cross-Entropy Loss, Softmax Loss, Logistic Loss, Focal Loss and all those confusing names](https://gombru.github.io/2018/05/23/cross_entropy_loss/#:~:text=is%20available%20here-,Binary%20Cross%2DEntropy%20Loss,affected%20by%20other%20component%20values.)\r\n\r\n[Loss vs Accuracy](https://kharshit.github.io/blog/2018/12/07/loss-vs-accuracy#:~:text=Cross%2Dentropy%20loss%20awards%20lower,0%20(for%20false%20ones).)",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,602 | 1,614 | 1,614 | NONE | null | ## Environment info
- `transformers` version: 3.3.1
- Platform: Linux-4.9.0-12-amd64-x86_64-with-debian-9.12
- Python version: 3.7.6
- PyTorch version (GPU?): 1.4.0
- Tensorflow version (GPU?): not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
examples/token-classification: @stefan-it
## Information
I am using the NER "Emerging and Rare Entities" task: the WNUT’17 (English NER) dataset.
I am executing the steps as prescribed in https://github.com/huggingface/transformers/tree/08ba4b4902df5a18f5ad41d9490c50fe0a4c970f/examples/token-classification
The problem arises when using prediction:
* [ ] the official example script: wnut_17.json
{
"data_dir": "/home/priya/data_wnut_17",
"labels": "/home/priya/data_wnut_17/labels.txt",
"model_name_or_path": "bert-large-cased",
"output_dir": "wnut-17-model-1",
"max_seq_length": 128,
"num_train_epochs": 3,
"per_device_train_batch_size": 16,
"save_steps": 425,
"seed": 1,
"do_train": true,
"do_eval": true,
"do_predict": true,
"fp16": false
}
* [ ] my own modified scripts: wnut_17_mod.json
{
"data_dir": "/home/priya/data_wnut_17",
"labels": "/home/priya/data_wnut_17/labels.txt",
"model_name_or_path": "bert-large-cased",
"output_dir": "wnut-17-model-1",
"max_seq_length": 128,
"num_train_epochs": 3,
"per_device_train_batch_size": 16,
"save_steps": 425,
"seed": 1,
"do_train": **false,**
"do_eval": **false**,
"do_predict": true,
"fp16": false,
"overwrite_output_dir":false
}
The tasks I am working on is:
* [ ] re-run WNUT’17 dataset.
My end-goal is to identify abbreviation and explanation from sentences (labels B-abbr, I-abbr, B-expl, I-expl and O). For the example sentence
> Here GAAP stands for Generally accepted accounting principles
we should get token classified as
> Here O
> GAAP B-abbr
> stands O
> for O
> Generally B-expl
> accepted I-expl
> accounting I-expl
> principles I-expl
## To reproduce
Steps to reproduce the behavior:
1.python run_ner.py wnut_17.json
Prediction: 100%|████████████████████████████████████████████████████████| 162/162 [01:08<00:00, 2.38it/s]
10/06/2020 07:21:15 - INFO - __main__ - eval_loss = 0.2851179020827574
10/06/2020 07:21:15 - INFO - __main__ - eval_accuracy_score = 0.9511413182867402
10/06/2020 07:21:15 - INFO - __main__ - eval_precision = 0.5997392438070405
10/06/2020 07:21:15 - INFO - __main__ - eval_recall = 0.4263206672845227
10/06/2020 07:21:15 - INFO - __main__ - eval_f1 = 0.49837486457204777
2.python run_ner.py wnut_17_mod.json
Prediction: 100%|████████████████████████████████████████████████████████| 162/162 [01:08<00:00, 2.38it/s]
/opt/conda/lib/python3.7/site-packages/transformers/trainer.py:1175: FutureWarning: This method is deprecated, use `Trainer.is_world_process_zero()` instead.
warnings.warn("This method is deprecated, use `Trainer.is_world_process_zero()` instead.", FutureWarning)
10/06/2020 08:30:41 - INFO - __main__ - eval_loss = 2.827890293463151
10/06/2020 08:30:41 - INFO - __main__ - eval_accuracy_score = 0.016072497221509788
10/06/2020 08:30:41 - INFO - __main__ - eval_precision = 0.0065180614986565825
10/06/2020 08:30:41 - INFO - __main__ - eval_recall = 0.12140871177015755
10/06/2020 08:30:41 - INFO - __main__ - eval_f1 = 0.012371912924399112
## Expected behavior
I am seeing a 10-fold increase in eval-loss from 0.28 to 2.8.
Other than the changes in wnut_17_mod.json, I have made no other changes. Please advise how to achieve the published eval_loss and performance.
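One guess, not confirmed in this thread: the prediction-only config still points `model_name_or_path` at the base `bert-large-cased`, so prediction may be running with an untrained classification head. If so, pointing it at the fine-tuned checkpoint from the first run should help (the output directory name is illustrative; other fields unchanged):
```json
{
  "model_name_or_path": "wnut-17-model-1",
  "output_dir": "wnut-17-model-1-eval",
  "do_train": false,
  "do_eval": false,
  "do_predict": true
}
```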
Thanks,
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7655/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7655/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7654 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7654/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7654/comments | https://api.github.com/repos/huggingface/transformers/issues/7654/events | https://github.com/huggingface/transformers/issues/7654 | 717,046,015 | MDU6SXNzdWU3MTcwNDYwMTU= | 7,654 | output probabilities of generated sequences in generate function | {
"login": "yuvalkirstain",
"id": 57996478,
"node_id": "MDQ6VXNlcjU3OTk2NDc4",
"avatar_url": "https://avatars.githubusercontent.com/u/57996478?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuvalkirstain",
"html_url": "https://github.com/yuvalkirstain",
"followers_url": "https://api.github.com/users/yuvalkirstain/followers",
"following_url": "https://api.github.com/users/yuvalkirstain/following{/other_user}",
"gists_url": "https://api.github.com/users/yuvalkirstain/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yuvalkirstain/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuvalkirstain/subscriptions",
"organizations_url": "https://api.github.com/users/yuvalkirstain/orgs",
"repos_url": "https://api.github.com/users/yuvalkirstain/repos",
"events_url": "https://api.github.com/users/yuvalkirstain/events{/privacy}",
"received_events_url": "https://api.github.com/users/yuvalkirstain/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Duplicate of https://github.com/huggingface/transformers/issues/3891",
"Actually, those issues are different and we should probably provide this functionality!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,602 | 1,619 | 1,619 | CONTRIBUTOR | null | # 🚀 Feature request
Output the probabilities (or log-probabilities) of generated sequences from the `generate` function (generation utils); a rough sketch of my current workaround is below.
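In the meantime, here is a rough sketch of the workaround I'm using (model and prompt are placeholders; it re-scores the generated ids with a second forward pass):
```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = tokenizer("The weather today is", return_tensors="pt")
generated = model.generate(**prompt, max_length=20)

with torch.no_grad():
    logits = model(generated).logits  # (batch, seq_len, vocab)

# log P(token_i | tokens_<i) for every position after the first
log_probs = logits[:, :-1].log_softmax(-1)
token_logp = log_probs.gather(-1, generated[:, 1:].unsqueeze(-1)).squeeze(-1)

prompt_len = prompt["input_ids"].shape[1]
sequence_logp = token_logp[:, prompt_len - 1 :].sum(-1)  # log P(continuation | prompt)
print(sequence_logp)
```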
thank you so much! :) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7654/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7654/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7653 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7653/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7653/comments | https://api.github.com/repos/huggingface/transformers/issues/7653/events | https://github.com/huggingface/transformers/pull/7653 | 716,969,454 | MDExOlB1bGxSZXF1ZXN0NDk5NjE0Mjg1 | 7,653 | [pseudolabels] cleanup markdown table | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,602 | 1,602 | 1,602 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7653/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7653/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7653",
"html_url": "https://github.com/huggingface/transformers/pull/7653",
"diff_url": "https://github.com/huggingface/transformers/pull/7653.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7653.patch",
"merged_at": 1602126259000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/7652 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7652/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7652/comments | https://api.github.com/repos/huggingface/transformers/issues/7652/events | https://github.com/huggingface/transformers/pull/7652 | 716,956,136 | MDExOlB1bGxSZXF1ZXN0NDk5NjAzNjE3 | 7,652 | Fix 3 failing slow bart/blender tests | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I don't think the change can possibly harm, so I will merge without review. cc @sgugger @LysandreJik "
] | 1,602 | 1,602 | 1,602 | CONTRIBUTOR | null | 3 of these were simple fixes.
+ 1 typo in blenderbot
+ 2 BART failures caused by the new `assert_tensors_close` helper fn checking shapes more aggressively (sketched below). Output shapes have not changed.
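For reference, a minimal sketch of what such a helper looks like (illustrative only — not the exact implementation in the test suite):
```python
import torch

def assert_tensors_close(a: torch.Tensor, b: torch.Tensor, atol: float = 1e-4):
    # Check shapes first, so a shape regression fails loudly instead of silently broadcasting.
    assert a.shape == b.shape, f"shape mismatch: {tuple(a.shape)} vs {tuple(b.shape)}"
    assert torch.allclose(a, b, atol=atol), "values differ beyond tolerance"
```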
The fourth failure is a bit harder to verify: Blenderbot 3B was OOMing.
Before fix: 11.4 GB peak GPU memory. After fix: 6.4 GB.
Why: casting to fp16 *before* moving the weights to CUDA, so only half-precision tensors ever land on the GPU.
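A minimal sketch of the two orderings (illustrative only — `AutoModelForSeq2SeqLM` and the hub id stand in for the exact test code):
```python
from transformers import AutoModelForSeq2SeqLM

name = "facebook/blenderbot-3B"  # assumption: the checkpoint under test

# Before: the full fp32 weights land on the GPU before being cast (~11 GB here).
# model = AutoModelForSeq2SeqLM.from_pretrained(name).to("cuda").half()

# After: cast to fp16 on CPU first, so only half-precision weights ever hit the GPU (~6 GB).
model = AutoModelForSeq2SeqLM.from_pretrained(name).half().to("cuda")
```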
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7652/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7652/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7652",
"html_url": "https://github.com/huggingface/transformers/pull/7652",
"diff_url": "https://github.com/huggingface/transformers/pull/7652.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7652.patch",
"merged_at": 1602122703000
} |
https://api.github.com/repos/huggingface/transformers/issues/7651 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7651/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7651/comments | https://api.github.com/repos/huggingface/transformers/issues/7651/events | https://github.com/huggingface/transformers/issues/7651 | 716,951,348 | MDU6SXNzdWU3MTY5NTEzNDg= | 7,651 | Fix Failing Slow tests | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,602 | 1,602 | 1,602 | CONTRIBUTOR | null | ```
FAILED tests/test_modeling_bart.py::BartHeadTests::test_tokenization - Assert...
FAILED tests/test_modeling_bart.py::BartModelIntegrationTests::test_mnli_inference
FAILED tests/test_modeling_blenderbot.py::Blenderbot3BIntegrationTests::test_generation_from_short_input_same_as_parlai_3B
FAILED tests/test_modeling_blenderbot.py::Blenderbot90MIntegrationTests::test_90_generation_from_long_input
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7651/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7651/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7650 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7650/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7650/comments | https://api.github.com/repos/huggingface/transformers/issues/7650/events | https://github.com/huggingface/transformers/pull/7650 | 716,910,916 | MDExOlB1bGxSZXF1ZXN0NDk5NTY2NTc3 | 7,650 | Import integration libraries first | {
"login": "dsblank",
"id": 168568,
"node_id": "MDQ6VXNlcjE2ODU2OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/168568?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dsblank",
"html_url": "https://github.com/dsblank",
"followers_url": "https://api.github.com/users/dsblank/followers",
"following_url": "https://api.github.com/users/dsblank/following{/other_user}",
"gists_url": "https://api.github.com/users/dsblank/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dsblank/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dsblank/subscriptions",
"organizations_url": "https://api.github.com/users/dsblank/orgs",
"repos_url": "https://api.github.com/users/dsblank/repos",
"events_url": "https://api.github.com/users/dsblank/events{/privacy}",
"received_events_url": "https://api.github.com/users/dsblank/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,602 | 1,602 | 1,602 | CONTRIBUTOR | null | # What does this PR do?
This PR restores the order of importing 3rd-party integrations before other ML frameworks, and before any other `transformers` modules; the sketch below shows the ordering it enforces.
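From the user's side the required ordering looks roughly like this (a sketch; `comet_ml` is the integration that motivated the fix, and its auto-logging hooks need to patch the frameworks at import time):
```python
import comet_ml  # noqa: F401 — must be imported before the ML frameworks

import torch  # noqa: F401
from transformers import Trainer  # safe now that comet_ml is already loaded
```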
## Before PR:
* importing comet_ml later causes an error
## After PR:
* using comet_ml functionality is restored | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7650/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7650/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7650",
"html_url": "https://github.com/huggingface/transformers/pull/7650",
"diff_url": "https://github.com/huggingface/transformers/pull/7650.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7650.patch",
"merged_at": 1602260003000
} |
https://api.github.com/repos/huggingface/transformers/issues/7649 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7649/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7649/comments | https://api.github.com/repos/huggingface/transformers/issues/7649/events | https://github.com/huggingface/transformers/issues/7649 | 716,894,798 | MDU6SXNzdWU3MTY4OTQ3OTg= | 7,649 | setup of Trainer class for distributed trainning | {
"login": "FTD007",
"id": 14077015,
"node_id": "MDQ6VXNlcjE0MDc3MDE1",
"avatar_url": "https://avatars.githubusercontent.com/u/14077015?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FTD007",
"html_url": "https://github.com/FTD007",
"followers_url": "https://api.github.com/users/FTD007/followers",
"following_url": "https://api.github.com/users/FTD007/following{/other_user}",
"gists_url": "https://api.github.com/users/FTD007/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FTD007/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FTD007/subscriptions",
"organizations_url": "https://api.github.com/users/FTD007/orgs",
"repos_url": "https://api.github.com/users/FTD007/repos",
"events_url": "https://api.github.com/users/FTD007/events{/privacy}",
"received_events_url": "https://api.github.com/users/FTD007/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"We prefer to use the [forum](https://discuss.huggingface.co/) for questions like this. The class `HFArgumentParser` is there to help parse the arguments received by your script and pass them along to `Trainer`. Look at the [run_glue script](https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_glue.py) for an example of use. You should then be able to use your script with `torch.distributed.launch`.",
"I also have same problem. Are you slove this problem? Can you tell the right way to train the model on multi-gpu, just one machine. Thanks.",
"> I also have same problem. Are you slove this problem? Can you tell the right way to train the model on multi-gpu, just one machine. Thanks.\r\n\r\nare u using K80 gpu? I found K80 likely have communication problem which does not have a easy fix.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,602 | 1,609 | 1,609 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
<!-- Description of your issue -->
I am running the sample code and got confused about how to set up distributed training; below is the code I used:
```python
from pathlib import Path

from tokenizers import ByteLevelBPETokenizer
from tokenizers.processors import BertProcessing

tokenizer = ByteLevelBPETokenizer(
    "./EsperBERTo/vocab.json",
    "./EsperBERTo/merges.txt",
)
tokenizer.enable_truncation(max_length=512)

import torch

torch.cuda.is_available()

from transformers import RobertaConfig

config = RobertaConfig(
    vocab_size=52_000,
    max_position_embeddings=514,
    num_attention_heads=12,
    num_hidden_layers=6,
    type_vocab_size=1,
)

from transformers import RobertaTokenizerFast

tokenizer = RobertaTokenizerFast.from_pretrained("./EsperBERTo", max_len=512)

from transformers import RobertaForMaskedLM

model = RobertaForMaskedLM(config=config)

from transformers import LineByLineTextDataset

dataset = LineByLineTextDataset(
    tokenizer=tokenizer,
    file_path="./oscar.eo.txt",
    block_size=128,
)

from transformers import DataCollatorForLanguageModeling

data_collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)

from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="./EsperBERTo",
    overwrite_output_dir=True,
    num_train_epochs=1,
    per_device_train_batch_size=64,
    save_steps=10_000,
    save_total_limit=2,
    fp16=True,
    local_rank=3,
)

trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    train_dataset=dataset,
    prediction_loss_only=True,
)

trainer.train()
trainer.save_model("./EsperBERTo")
```
I want to know how to set the `local_rank` parameter in the `Trainer` class and what command I should use.

```bash
python -m torch.distributed.launch --nproc_per_node=4 --nnodes=1 --node_rank=3 --master_addr="192.168.1.1" --master_port=1234 starttrans2.py
```

Is the above a correct way to run this script if I want to run on a single machine with 4 GPUs?
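From my reading of the `torch.distributed.launch` docs, a single-machine run should not need `--nnodes`/`--node_rank`/`--master_addr` at all, and `local_rank` is supposed to be injected by the launcher rather than hard-coded — something like this (my guess, untested):
```bash
python -m torch.distributed.launch --nproc_per_node=4 starttrans2.py
```
with the script consuming the `--local_rank` argument the launcher passes in (e.g. via `HfArgumentParser`) instead of setting `local_rank=3` in `TrainingArguments`. Is that the right approach?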
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on the forum/Stack Overflow**: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7649/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7649/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7648 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7648/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7648/comments | https://api.github.com/repos/huggingface/transformers/issues/7648/events | https://github.com/huggingface/transformers/issues/7648 | 716,813,052 | MDU6SXNzdWU3MTY4MTMwNTI= | 7,648 | does tokenizer support emoji? | {
"login": "steveguang",
"id": 9006809,
"node_id": "MDQ6VXNlcjkwMDY4MDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/9006809?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/steveguang",
"html_url": "https://github.com/steveguang",
"followers_url": "https://api.github.com/users/steveguang/followers",
"following_url": "https://api.github.com/users/steveguang/following{/other_user}",
"gists_url": "https://api.github.com/users/steveguang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/steveguang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/steveguang/subscriptions",
"organizations_url": "https://api.github.com/users/steveguang/orgs",
"repos_url": "https://api.github.com/users/steveguang/repos",
"events_url": "https://api.github.com/users/steveguang/events{/privacy}",
"received_events_url": "https://api.github.com/users/steveguang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! The tokenizer you're using (`bert-base-uncased`) was not trained with emojis, therefore it cannot tokenize them correctly. You should add this token to the tokenizer vocabulary:\r\n```py\r\nfrom transformers import BertTokenizer\r\n\r\ntokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)\r\ns =\" 😃 hello how are you\"\r\n\r\ntokenizer.add_tokens(\"😃\")\r\nprint(tokenizer.tokenize(s))\r\n\r\n# ['😃', 'hello', 'how', 'are', 'you']\r\n```\r\n\r\nPlease be aware that the model you're using should have its embedding matrix updated to include the embedding for the new token added. You can see the [documentation here](https://huggingface.co/transformers/internal/tokenization_utils.html?highlight=add_token#transformers.tokenization_utils_base.SpecialTokensMixin.add_tokens), here's how you should update your model embedding matrix:\r\n```py\r\n# Let's see how to increase the vocabulary of Bert model and tokenizer\r\ntokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')\r\nmodel = BertModel.from_pretrained('bert-base-uncased')\r\n\r\nnum_added_toks = tokenizer.add_tokens(['new_tok1', 'my_new-tok2'])\r\nprint('We have added', num_added_toks, 'tokens')\r\n # Notice: resize_token_embeddings expect to receive the full size of the new vocabulary, i.e., the length of the tokenizer.\r\nmodel.resize_token_embeddings(len(tokenizer))\r\n```",
"Thanks, @LysandreJik ! I have another question. When I train using tweets, since there is a lot of noise, a tweet like 'This is soooo good' would be a problem for BERT tokenizer cuz \"soooo\" is not in the vocabulary. Is there a method to add all of them? Right now I am thinking about a kinda ugly way, just use nltk tweettokenizer to process all tweets and add to vocab with words, emoji, etc that appear frequently",
"Hi @steveguang, sentences like `This is soooo good` actually won't be a problem for the BERT tokenizer, as it can decompose the word `soooo` in multiple tokens:\r\n\r\n```py\r\n>>> from transformers import BertTokenizer\r\n>>> tokenizer = BertTokenizer.from_pretrained(\"bert-base-cased\")\r\n>>> tokenizer.tokenize(\"This is soooo good\")\r\n['This', 'is', 'so', '##oo', '##o', 'good']\r\n```\r\n\r\nHowever, when working with a dataset that seems to have a lot of unknown tokens, it is generally a good idea to identify the tokens that come up relatively often and to add them to your tokenizer. A good example would be the emojis mentioned above, as these are an important attribute to the meaning of the sentence."
] | 1,602 | 1,602 | 1,602 | NONE | null | Hi, I have the code below and it always encodes emoji as "unk". Can someone tell me what I should do? Thanks
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)
s = " 😃 hello how are you"
print(tokenizer.tokenize(s))
# ['[UNK]', 'hello', 'how', 'are', 'you']
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7648/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7648/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7647 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7647/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7647/comments | https://api.github.com/repos/huggingface/transformers/issues/7647/events | https://github.com/huggingface/transformers/issues/7647 | 716,800,235 | MDU6SXNzdWU3MTY4MDAyMzU= | 7,647 | Project: Gather summarization datasets and try to replicate pegasus results on them | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
},
{
"id": 2368374212,
"node_id": "MDU6TGFiZWwyMzY4Mzc0MjEy",
"url": "https://api.github.com/repos/huggingface/transformers/labels/pegasus",
"name": "pegasus",
"color": "1f76a8",
"default": false,
"description": ""
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"yes, please",
"I could work on getting the datsets, replicating will be hard (compute!!!). I have shared wikihow and arxiv on forum",
"I will start working on this over the next few days, so let's not duplicate the efforts and claim here which ones we are working on.",
"@stas00 \r\n\r\nfollowing remaining datsets are available in the `datsets` lib\r\n```\r\n- multi_news\r\n- reddit_tifu\r\n- billsum\r\n- aeslc\r\n```\r\n\r\ncould write a script to download and process these",
"Do you mean to say that these 4 you listed are already in hf's `datasets`, and so we only need to download and convert these, right?\r\n\r\nSo the others that you haven't listed and Sam hasn't already processed still need to be sorted out from scratch, correct?\r\n\r\nMy plan was to start with `wikihow` as you shared some instructions at https://discuss.huggingface.co/t/wikihow-dataset-preprocessing/1413",
"> And so we only need to download and convert these, right?\r\n\r\nYes, these 4 are already in hf's `datasets`, we just convert and do some pre-processing before, \r\n\r\nI have shared arxiv as well but that needs to be pre-processed.\r\n\r\nfor `newsroom` we need to request it from the author, so I'm not sure if we are allowed to share it directly.",
"If it's very heavy compute+disc-space-wise we could write scripts for small samples and then ask Sam or somebody at HF to run on the full data - since they probably have access to better hardware than us.",
"`arxiv` is huge (3.9 GB something), rest we can handle on colab I guess",
"OK, I will start with `wikihow` and in parallel will inquire w/ the author of `newsroom` wrt permission, since the latter could take time. \r\n\r\nAnd then do `arxiv` afterwards.\r\n\r\nSo do you want to work on the 4 you listed, meanwhile? Either way works for me so please don't hesitate to choose what works the best for you.",
"Yes, I'll take those 4 :)",
"`newsroom` can also be consumed through `datsets` but needs manual download",
"yes, I was just looking at https://huggingface.co/datasets/newsroom but the information is wrong:\r\n```\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\"newsroom\")\r\n```\r\n```\r\nDownloading: 5.21kB [00:00, 1.45MB/s]\r\nDownloading: 2.68kB [00:00, 844kB/s]\r\nUsing custom data configuration default\r\nDownloading and preparing dataset newsroom/default (download: Unknown size, generated: 4.94 GiB, post-processed: Unknown size, total: 4.94 GiB) to /home/stas/.cache/huggingface/datasets/newsroom/default/1.0.0/4b405ccd64e15f685065870ea563a1e6a034d1bd269a5427f40146d81549095e...\r\nTraceback (most recent call last):\r\n File \"x\", line 3, in <module>\r\n dataset = load_dataset(\"newsroom\")\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/load.py\", line 608, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/builder.py\", line 453, in download_and_prepare\r\n assert (\r\nAssertionError: The dataset newsroom with config default requires manual data.\r\n Please follow the manual download instructions: You should download the dataset from http://lil.datasets.cornell.edu/newsroom/\r\n The webpage requires registration.\r\n To unzip the .tar file run `tar -zxvf complete.tar`. To unzip the .gz files\r\n run `gunzip train.json.gz` , ...\r\n After downloading, please put the files under the following names\r\n dev.jsonl, test.jsonl and train.jsonl in a dir of your choice,\r\n which will be used as a manual_dir, e.g. `~/.manual_dirs/newsroom`\r\n Newsroom can then be loaded via:\r\n `datasets.load_dataset(\"newsroom\", data_dir=\"~/.manual_dirs/newsroom\")`.\r\n .\r\n Manual data can be loaded with `datasets.load_dataset(newsroom, data_dir='<path/to/manual/data>')\r\n```\r\n\r\nNo such thing as http://lil.datasets.cornell.edu/newsroom/ - getting 404.\r\n\r\nThis is not the first bogus dataset in `datasets`.\r\n",
"We need to request it from here http://lil.nlp.cornell.edu/newsroom/download/index.html\r\n\r\n",
"Geesh, this one \r\nhttps://github.com/lil-lab/newsroom\r\nalso links to 404\r\nhttps://summari.es/download/",
"Hmm, it looks that perhaps somebody at HF should file this form then, correct?\r\nhttp://lil.nlp.cornell.edu/newsroom/download/index.html -> https://cornell.qualtrics.com/jfe/form/SV_6YA3HQ2p75XH4IR\r\nWe can't use our names to ask for a permission for the dataset to be used by an open source project.\r\n@sshleifer?",
"scraping newsroom is hard! Better to request it.\r\n\r\nI had requested it, I got the link after a month and by the time I saw the mail it was already expired 😂 \r\n\r\nSo, it would be better if someone from HF requests it, they will probably receive it faster",
"We definitely shouldn't scrape it, since we won't be able to use it anyway w/o their permission. So yes, @sshleifer, please help us out here. ",
"Helper scripts for pubmed\r\n\r\nhttps://github.com/armancohan/long-summarization\r\nhttps://github.com/kedz/summarization-datasets",
"here are the results of eval on the wikihow data you shared, @patil-suraj \r\n\r\nThis on dual Titan X:\r\n\r\n* sample of 100, run time: 0:03:05\r\n`{'rouge1': 23.7695, 'rouge2': 5.3349, 'rougeL': 15.6991, 'rougeLsum': 16.7567, 'n_obs': 100, 'seconds_per_sample': 2.433, 'n_gpus': 2}`\r\n* full, run time: 8:19:35\r\n`{'rouge1': 24.6291, 'rouge2': 5.7999, 'rougeL': 15.6812, 'rougeLsum': 16.6907, 'n_obs': 11996, 'seconds_per_sample': 2.505, 'n_gpus': 2}`\r\n\r\nSo that gives us 24.63/5.80/16.69 which is far far away from 46.39/22.12/38.41\r\n\r\nThe command was:\r\n\r\n```\r\npython -m torch.distributed.launch --nproc_per_node=2 run_distributed_eval.py --model_name google/pegasus-large \\\r\n--save_dir xsum_generations --data_dir /hf/wikihow/wikihow --prefix test --bs 4\r\n```\r\n",
"That's scary low. Do you think there is an issue with dataset ? ",
"@stas00 , @sshleifer \r\nWrote a helper script to download and save summ datasets\r\nhttps://github.com/patil-suraj/summarization_datasets\r\n\r\nCurrently includes `aeslc, billsum and reddit_tifu`, rest should be easy to add.\r\n\r\nProcessing scripts are taken form the official datset repos, split information is copied from the `pegasus` repo.\r\n\r\nEnjoy!",
"@stas00 Try using `google/pegasus-wikihow` as the model can do `--n_obs 100` now that we are calibrated. I should have specified that in the spec. We want to test the fine-tuned model.\r\n\r\nWould also be interested in knowing whether `--max_source_length 512` changes anything.\r\n(You can see the expected params that should be checked into each config [here](https://github.com/huggingface/transformers/blob/master/src/transformers/configuration_pegasus.py#L54) In those triples, `length_penalty` and `max_length` are generation params that should be reflected in `model.config`, `max_position_embeddings` should only be reflected in `tokenizer.model_max_length` (didn't save static pos embeddings, I don't think).",
"# google/pegasus-wikihow\r\n```\r\npython -m torch.distributed.launch --nproc_per_node=2 run_distributed_eval.py --model_name google/pegasus-wikihow \\\r\n--save_dir xsum_generations --data_dir /hf/wikihow/wikihow --prefix test --n_obs 100 --bs 4\r\n```\r\n\r\n```\r\n{'rouge1': 21.4782, 'rouge2': 8.7003, 'rougeL': 18.9314, 'rougeLsum': 18.8476, 'n_obs': 100, 'seconds_per_sample': 1.1432, 'n_gpus': 2}\r\n```\r\nThere is a slight improvement on all but `rouge1` w/ `google/pegasus-wikihow`\r\n\r\nIt also appears to be much faster!\r\n\r\n\r\nOn 1000 objects the performance drops:\r\n\r\n```\r\n{'rouge1': 20.7939, 'rouge2': 8.4804, 'rougeL': 18.12, 'rougeLsum': 18.0778, 'n_obs': 1000, 'seconds_per_sample': 0.3459, 'n_gpus': 2}\r\n```\r\n\r\nmy intuition tells me that either the dataset has some broken data in it, or all of it has some issues - since we aren't getting above the score from 100 objects.\r\n\r\n\r\n# --max_source_length 512\r\n\r\n```\r\npython -m torch.distributed.launch --nproc_per_node=2 run_distributed_eval.py --model_name google/pegasus-wikihow \\\r\n--save_dir xsum_generations --data_dir /hf/wikihow/wikihow --prefix test --n_obs 100 --bs 4 \\\r\n--max_source_length 512\r\n```\r\n\r\n```\r\n{'rouge1': 21.5527, 'rouge2': 8.6861, 'rougeL': 18.9145, 'rougeLsum': 18.9772, 'n_obs': 100, 'seconds_per_sample': 0.5674, 'n_gpus': 2}\r\n```\r\n\r\nlooks worse on 2 scores, better on 2 other scores.\r\n",
"> Do you think there is an issue with dataset ?\r\n\r\nI didn't get a chance to study it yet - just had the time to run the eval.",
"need a little script to convert the json dumps into a nice md table so that it's easier to read the results, like `run_eval_search.py` does.",
"`newsroom`: filled out the form\r\n`wikihow`: asked the authors https://github.com/google-research/pegasus/issues/111 if @stas00 could paste 1 article, 1 target and 1 generation as a comment on that issue, it would be helpful.\r\n\r\n`gigaword`: Done",
"@patil-suraj if you have preprocessed links you want me to run evaluate on, feel free to post/slack and I can run eval. My preference would be to gdown/unzip a directory that includes only\r\n\r\n```\r\ndata/test.source\r\ndata/test.target\r\n```",
"I started a sub-section of my porting repo to gather script and instructions for building these datasets:\r\nhttps://github.com/stas00/porting/tree/master/datasets/pegasus\r\n\r\nSo for completed things please either submit a PR or send me the files and I will add them there. Whatever is more efficient for you.\r\n\r\np.s. I'm doing it in a separate repo, since @sshleifer doesn't think they should go into the main repo (I think they should, but this can be fixed later as long as we have them).\r\n",
"Here is a little helper util that helps to show the differences in strings - useful when matching pre-processing data.\r\n\r\n```\r\nimport difflib\r\ndef str_compare(a, b):\r\n \"\"\" \r\n If strings are mismatched, print the diff with context\r\n Returns true if strings match, false otherwise\r\n adapted from https://stackoverflow.com/a/17904977/9201239\r\n \"\"\"\r\n \r\n match = True\r\n if len(a) != len(b):\r\n print(f\"length mismatch: a={len(a)}, b={len(b)}\")\r\n \r\n def context(s, i):\r\n start = i-10\r\n end = i+10\r\n if start < 0: start = 0\r\n if end > len(s)-1: end = len(s)-1\r\n return s[start:end]\r\n \r\n for i, s in enumerate(difflib.ndiff(a, b)):\r\n if s[0] == ' ': \r\n continue \r\n elif s[0] == '-':\r\n match = False\r\n print(f'Delete \"{s[-1]}\" from position {i}, ctx=[{context(a, i)}]')\r\n elif s[0] == '+':\r\n match = False\r\n print(f'Add \"{s[-1]}\" to position {i}, ctx=[{context(a, i)}')\r\n \r\n return match\r\n```",
"I'm trying to reproduce the multi-news results. But it seems the ROUGE scores are not even in the ballpark of the original report or the ones in [here](https://docs.google.com/spreadsheets/d/1ODfoK-tXOV6TLXDMnujdGLtFhA8oVTy-Cv6Ib6qKgWk/edit#gid=0).\r\n\r\nThe command I used was \r\n`python -m torch.distributed.launch --nproc_per_node=4 run_distributed_eval.py --model_name google/pegasus-multi_news --data_dir multi-news/processed/hf/ --save_dir output_data/ --bs 6`\r\n\r\n`{\"rouge1\": 44.7752, \"rouge2\": 16.1437, \"rougeL\": 22.7593, \"rougeLsum\": 40.5531, \"n_obs\": 5622, \"seconds_per_sample\": 0.6931, \"n_gpus\": 4}`\r\n\r\nI downloaded the data from the original authors of Multi-News: [link](https://drive.google.com/drive/folders/1qZ3zJBv0zrUy4HVWxnx33IsrHGimXLPy). \r\n\r\nI'm not sure if the discrepancy is due to the preprocessing, but to my understanding, pegasus only replaces `NEWLINE_CHAR` with `\\n`. Could someone give some hints?"
] | 1,602 | 1,623 | 1,603 | CONTRIBUTOR | null | Dear @stas00 and whoever else is willing to help!
So far I have only checked pegasus' rouge scores on 2/12 datasets for which we have checkpoints.
For the other 10 datasets I either haven't tried or have tried briefly and gotten stuck.
The full scope of the project is that:
for each dataset:
1) There is an automated way to download the data, either from S3 or source. (To the extent possible, much of the logic in this script should eventually live in the `datasets` package).
2) we know our pegasus implementation's rouge score
2b) if our score is very different than the authors', we know whether that difference is due to data preprocessing, and if it is, we can preprocess the dataset similarly to the pegasus authors.
3) Our rouge score is within 0.3 Rouge2 of the (Authors) column in the table below. (A quick way to pull candidate datasets from the `datasets` package is sketched right after this list.)
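As a starting point, several of the candidates are already exposed by the `datasets` package — a sketch (dataset ids are my best guess; double-check names/configs on the hub):
```python
from datasets import load_dataset

gigaword = load_dataset("gigaword")
arxiv = load_dataset("scientific_papers", "arxiv")  # "pubmed" is the other config
print(gigaword["test"][0])  # {'document': ..., 'summary': ...}
```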
### Steps
#### Getting Data
By far the most difficult part of each project is getting the dataset. And giving up quickly if you can't and writing a github issue somewhere.
I tried 1 approach to getting data: [this script](https://gist.github.com/sshleifer/c4aed7bf4418b50caee731e94be05d9f)
It worked for gigaword, I just haven't done the evaluation, but it failed for `aeslc` and then I gave up.
Another complementary approach would be to try to directly use the [pegasus dataset code](https://github.com/google-research/pegasus/blob/master/pegasus/data/public_supervised_datasets.py)
This will likely push preprocessing issues towards the back of the project (i.e. to when we try to send PRs to the datasets repo), but might be better than using my script.
#### After you get data
When you have gotten a dataset you can sanity check
```bash
# --model_name: see note 1 below
python -m torch.distributed.launch --nproc_per_node=2 run_distributed_eval.py \
    --model_name google/pegasus-large \
    --save_dir xsum_generations \
    --data_dir xsum \
    --prefix test \
    --n_obs 100
```
Note 1: to avoid downloading all the checkpoints, you can just keep running pegasus-large and expect a high-single-digits or better rouge2 score; or you can change this to the relevant checkpoint.
Note 2: I am happy to run all the evals on newer hardware, very easy for me.
Note 3: We can do data sharing by getting you aws creds, or some other solution. Key is that I can download from command line, e.g. Google Drive + gdown.
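e.g. fetching a Drive-hosted archive might look like this (sketch — `FILE_ID` is a placeholder, and the layout we want is `data/test.source` / `data/test.target`):
```bash
pip install gdown
gdown --id FILE_ID -O dataset.tar.gz  # FILE_ID is a placeholder
tar -xzf dataset.tar.gz
ls data/test.source data/test.target
```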
### Misc thoughts:
+ arxiv and pubmed are listed under `scientific_papers` in the datasets package.
+ This is really 10 projects (1 each dataset, 2 of which I've started). If I were you I would ignore the started 2 and start on a few other ones.
+ If a dataset only has train/test or train/val or some other splits, see how the pegasus authors did the split.
+ Partial credit is valuable!
+ this could easily have been an issue for the datasets project rather than the transformers project.
+ There is no reason to merge PRs quickly for this project, but eventually we want a (much better) download_summ_dataset.py script or instructions for using other libs to accomplish the same outcome.
+ Will be good for both of us to learn the datasets internals.
+ Raw Billsum has multiple line articles, which breaks everything :( , (we could try to support raw nlp datasets in our `DataLoader`)
Here is a copy of the table we are trying to fill out in #6844 : (I made a new issue to avoid spamming that one)
| dataset | Authors| This Repo|
| ---- | ----|----|
| xsum | 47.60/24.83/39.64| 46.87/24.46/39.15|
| cnn_dailymail | 44.16/21.56/41.30| see 1|
| newsroom | 45.07/33.39/41.28 | have `.tar` file|
| multi_news | 47.65/18.75/24.95| |
| gigaword | 39.65/20.47/36.76| 39.79/20.56/36.80|
| wikihow | 46.39/22.12/38.41 *| Asked Authors |
| reddit_tifu | 27.99/9.81/22.94|32.75/11.68/24.97|
| big_patent |52.29/33.08/41.66 *| |
| arxiv | 44.21/16.95/25.67| |
| pubmed | 45.97/20.15/28.25| |
| aeslc | 37.68/21.25/36.51|37.1/21.4/35.94|
| billsum | 59.67/41.58/47.59|54.99/37.43/43.07|
Originally from mixed & stochastic column of this [table](https://github.com/google-research/pegasus#results-update)
This was really long, and probably disorganized, so feel free to ask clarifying questions here or on slack!
cc @stas00
1) I got similar scores on cnn-dailymail by finetuning the authors' model on our dataset for a bit.
2) reddit_tifu: added `--min_length 32` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7647/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7647/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7646 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7646/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7646/comments | https://api.github.com/repos/huggingface/transformers/issues/7646/events | https://github.com/huggingface/transformers/pull/7646 | 716,770,932 | MDExOlB1bGxSZXF1ZXN0NDk5NDQ5OTY3 | 7,646 | Openai gpt for classification | {
"login": "fmcurti",
"id": 7762516,
"node_id": "MDQ6VXNlcjc3NjI1MTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/7762516?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fmcurti",
"html_url": "https://github.com/fmcurti",
"followers_url": "https://api.github.com/users/fmcurti/followers",
"following_url": "https://api.github.com/users/fmcurti/following{/other_user}",
"gists_url": "https://api.github.com/users/fmcurti/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fmcurti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fmcurti/subscriptions",
"organizations_url": "https://api.github.com/users/fmcurti/orgs",
"repos_url": "https://api.github.com/users/fmcurti/repos",
"events_url": "https://api.github.com/users/fmcurti/events{/privacy}",
"received_events_url": "https://api.github.com/users/fmcurti/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"Hello! Thanks a lot for opening this PR. It seems that in the process, you ran a merge that went somewhat unexpectedly, as there's now 37 files changes and a +1960/-40 diff, which makes it impossible to review. Do you mind opening a new PR with only your commits, so that we can review it?",
"> Hello! Thanks a lot for opening this PR. It seems that in the process, you ran a merge that went somewhat unexpectedly, as there's now 37 files changes and a +1960/-40 diff, which makes it impossible to review. Do you mind opening a new PR with only your commits, so that we can review it?\r\n\r\nYes of course, I'll close this one and open a new PR"
] | 1,602 | 1,602 | 1,602 | CONTRIBUTOR | null | # What does this PR do?
Adds a sequence classification architecture for GPT-1,
strongly based on the modifications made in #7501. A rough usage sketch follows.
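Usage sketch of the new head (class name as added in this PR; the checkpoint and `num_labels` are illustrative, and availability depends on the installed version):
```python
import torch
from transformers import OpenAIGPTTokenizer, OpenAIGPTForSequenceClassification

tokenizer = OpenAIGPTTokenizer.from_pretrained("openai-gpt")
model = OpenAIGPTForSequenceClassification.from_pretrained("openai-gpt", num_labels=2)

inputs = tokenizer("a delightful film", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (batch_size, num_labels)
print(logits.argmax(-1))
```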
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #7623 (issue) (Partially)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [✓] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [✓] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [✓] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [✓] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
--> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7646/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7646/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7646",
"html_url": "https://github.com/huggingface/transformers/pull/7646",
"diff_url": "https://github.com/huggingface/transformers/pull/7646.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7646.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/7645 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7645/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7645/comments | https://api.github.com/repos/huggingface/transformers/issues/7645/events | https://github.com/huggingface/transformers/pull/7645 | 716,755,445 | MDExOlB1bGxSZXF1ZXN0NDk5NDM3MDk3 | 7,645 | Fix integration tests of DeBERTa | {
"login": "BigBird01",
"id": 38195654,
"node_id": "MDQ6VXNlcjM4MTk1NjU0",
"avatar_url": "https://avatars.githubusercontent.com/u/38195654?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BigBird01",
"html_url": "https://github.com/BigBird01",
"followers_url": "https://api.github.com/users/BigBird01/followers",
"following_url": "https://api.github.com/users/BigBird01/following{/other_user}",
"gists_url": "https://api.github.com/users/BigBird01/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BigBird01/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BigBird01/subscriptions",
"organizations_url": "https://api.github.com/users/BigBird01/orgs",
"repos_url": "https://api.github.com/users/BigBird01/repos",
"events_url": "https://api.github.com/users/BigBird01/events{/privacy}",
"received_events_url": "https://api.github.com/users/BigBird01/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@LysandreJik I just fix the numeric part of the tests. Another issue is that I just made the change to the model state keys, i.e. change bert.encoder to deberta.encoder. However, I can only upload the model to **DeBERTa/deberta-base, DeBERTa/deberta-large**. Could you help to mv those two model to the namespace of **microsoft**? Or could you add me to the organization **Microsoft**?",
"Hi! Sure, I can add you to the `microsoft` organization. What's your username on the hub? Thanks!",
"I'm uploading the two models with the modified names `bert` -> `deberta` right now.",
"> Hi! Sure, I can add you to the `microsoft` organization. What's your username on the hub? Thanks!\r\n\r\nThe name is **_DeBERTa_**",
"Cool, I'm adding you! I've done a PR here #7229 that solves all the integration tests. Do you mind reviewing it before we merge it? I've added comments to explain why the changes were so.",
"> > Hi! Sure, I can add you to the `microsoft` organization. What's your username on the hub? Thanks!\r\n> \r\n> The name is **_DeBERTa_**\r\n\r\nHi, @LysandreJik \r\n\r\nDid you add me **DeBERTa* to `microsoft`? I still can't see my account under `Microsoft`.\r\nSeems the model you uploaded to `Microsoft/deberta-base` and `Microsoft/deberta-large` is not loadable due to a format issue. \r\n",
"I've added you manually @BigBird01, but you should have been able to request to join from the website – was this not the case?",
"@BigBird01, what's the issue you have? I can load both:\r\n```py\r\n>>> from transformers import DebertaModel\r\n>>> model = DebertaModel.from_pretrained(\"microsoft/deberta-base\")\r\nDownloading: 100%|██████████| 448/448 [00:00<00:00, 510kB/s]\r\nDownloading: 100%|██████████| 559M/559M [00:50<00:00, 11.1MB/s]\r\nSome weights of the model checkpoint at microsoft/deberta-base were not used when initializing DebertaModel: ['deberta.embeddings.position_embeddings.weight']\r\n- This IS expected if you are initializing DebertaModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).\r\n- This IS NOT expected if you are initializing DebertaModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\n>>> model = DebertaModel.from_pretrained(\"microsoft/deberta-large\")\r\nDownloading: 100%|██████████| 449/449 [00:00<00:00, 578kB/s]\r\nDownloading: 100%|██████████| 1.63G/1.63G [02:42<00:00, 9.98MB/s]\r\nSome weights of the model checkpoint at microsoft/deberta-large were not used when initializing DebertaModel: ['deberta.embeddings.position_embeddings.weight']\r\n- This IS expected if you are initializing DebertaModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).\r\n- This IS NOT expected if you are initializing DebertaModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\n\r\n```"
] | 1,602 | 1,612 | 1,602 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
--> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7645/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7645/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7645",
"html_url": "https://github.com/huggingface/transformers/pull/7645",
"diff_url": "https://github.com/huggingface/transformers/pull/7645.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7645.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/7644 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7644/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7644/comments | https://api.github.com/repos/huggingface/transformers/issues/7644/events | https://github.com/huggingface/transformers/issues/7644 | 716,750,775 | MDU6SXNzdWU3MTY3NTA3NzU= | 7,644 | NER pipeline documentation example failing | {
"login": "r-maheshh",
"id": 72519990,
"node_id": "MDQ6VXNlcjcyNTE5OTkw",
"avatar_url": "https://avatars.githubusercontent.com/u/72519990?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/r-maheshh",
"html_url": "https://github.com/r-maheshh",
"followers_url": "https://api.github.com/users/r-maheshh/followers",
"following_url": "https://api.github.com/users/r-maheshh/following{/other_user}",
"gists_url": "https://api.github.com/users/r-maheshh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/r-maheshh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/r-maheshh/subscriptions",
"organizations_url": "https://api.github.com/users/r-maheshh/orgs",
"repos_url": "https://api.github.com/users/r-maheshh/repos",
"events_url": "https://api.github.com/users/r-maheshh/events{/privacy}",
"received_events_url": "https://api.github.com/users/r-maheshh/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"How many labels does your model have? You can see with `print(model.config.num_labels)`. If it's larger than the length of your `label_list`, that could result in an index out of range error.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,602 | 1,608 | 1,608 | NONE | null | Hello,
I am running the named entity recognition code from your documentation and trying to save the "ner" pipeline model locally:
https://huggingface.co/transformers/usage.html#named-entity-recognition
```
nlp = pipeline("ner")
sequence = "Hugging Face Inc. is a company based in New York City. Its headquarters are in DUMBO, therefore very" \
"close to the Manhattan Bridge which is visible from the window."
nlp.save_pretrained("path to folder")
```
When I load this model back and make predictions, I get the error "IndexError: list index out of range", pointing at the very last line below:
```
model = AutoModelForTokenClassification.from_pretrained("path to folder")
tokenizer = AutoTokenizer.from_pretrained("path to folder")
label_list = [
"O", # Outside of a named entity
"B-MISC", # Beginning of a miscellaneous entity right after another miscellaneous entity
"I-MISC", # Miscellaneous entity
"B-PER", # Beginning of a person's name right after another person's name
"I-PER", # Person's name
"B-ORG", # Beginning of an organisation right after another organisation
"I-ORG", # Organisation
"B-LOC", # Beginning of a location right after another location
"I-LOC" # Location
]
sequence = "Hugging Face Inc. is a company based in New York City. Its headquarters are in DUMBO, therefore very" \
"close to the Manhattan Bridge."
# Bit of a hack to get the tokens with the special tokens
tokens = tokenizer.tokenize(tokenizer.decode(tokenizer.encode(sequence)))
inputs = tokenizer.encode(sequence, return_tensors="pt")
outputs = model(inputs)[0]
predictions = torch.argmax(outputs, dim=2)
print([(token, label_list[prediction]) for token, prediction in zip(tokens, predictions[0].tolist())])
```
I would like to get the entity for each token. I believe that the error is in the "label_list" portion of the code. I ran the following, which prints each token along with its prediction represented as an integer:
`print([(token,prediction) for token, prediction in zip(tokens, predictions[0].tolist())])`
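For reference, here is the workaround I am considering: building the label list from the model config instead of hard-coding it. This is my own sketch, continuing from the snippet above, and it assumes the saved pipeline model keeps its `id2label` mapping in the config (not verified):
```
# derive the labels from the checkpoint itself instead of a hand-written list
print(model.config.num_labels)    # sanity check: should match the number of labels
id2label = model.config.id2label  # e.g. {0: "O", 1: "B-MISC", ...}

predictions = torch.argmax(outputs, dim=2)
print([(token, id2label[prediction]) for token, prediction in zip(tokens, predictions[0].tolist())])
```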
I am unable to recreate the output shown on the website due to that error. Any help would be much appreciated. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7644/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7644/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7643 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7643/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7643/comments | https://api.github.com/repos/huggingface/transformers/issues/7643/events | https://github.com/huggingface/transformers/issues/7643 | 716,717,914 | MDU6SXNzdWU3MTY3MTc5MTQ= | 7,643 | quick question about `BertForMaskedLM` | {
"login": "h56cho",
"id": 52889259,
"node_id": "MDQ6VXNlcjUyODg5MjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/h56cho",
"html_url": "https://github.com/h56cho",
"followers_url": "https://api.github.com/users/h56cho/followers",
"following_url": "https://api.github.com/users/h56cho/following{/other_user}",
"gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/h56cho/subscriptions",
"organizations_url": "https://api.github.com/users/h56cho/orgs",
"repos_url": "https://api.github.com/users/h56cho/repos",
"events_url": "https://api.github.com/users/h56cho/events{/privacy}",
"received_events_url": "https://api.github.com/users/h56cho/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"This has been fixed already but is only visible in the master documentation: see [here](https://huggingface.co/transformers/master/model_doc/bert.html#bertformaskedlm). The documentation that is shown by default corresponds to the last release and the fix in the docstrings has been done since then :-)",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,602 | 1,608 | 1,608 | NONE | null | Hello,
I have a question about the example code that can be found in the documentation for `BertForMaskedLM` model. The example from the documentation is shown below:
```python
from transformers import BertTokenizer, BertForMaskedLM
import torch
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForMaskedLM.from_pretrained('bert-base-uncased', return_dict=True)
input_ids = tokenizer("Hello, my dog is cute", return_tensors="pt")["input_ids"]
outputs = model(input_ids, labels=input_ids)
loss = outputs.loss
prediction_logits = outputs.logits
```
In this example's input string, "Hello, my dog is cute", I don't see any `mask_token`. Also, the example code simply passes the labels as `labels=input_ids`.
So in this particular example, how exactly does the `BertForMaskedLM` model calculate the masked-LM loss (since no `mask_token` appears in the input string)? When I simply pass `labels=input_ids`, does the `BertForMaskedLM` model automatically place the `mask_token` over the first token of the input string (or something similar to this)?
I don't think that the code provided in the documentation is wrong, because when I run the code on my machine, it runs smoothly without generating any error.
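For comparison, here is what I would have expected a masked-LM example to look like. This is my own sketch, relying on the convention that label positions set to `-100` are ignored by the cross-entropy loss:
```python
from transformers import BertTokenizer, BertForMaskedLM
import torch

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForMaskedLM.from_pretrained('bert-base-uncased', return_dict=True)

inputs = tokenizer("Hello, my dog is [MASK]", return_tensors="pt")

# score only the masked position; every label set to -100 is ignored by the loss
labels = inputs["input_ids"].clone()
mask_positions = labels == tokenizer.mask_token_id
labels[~mask_positions] = -100
labels[mask_positions] = tokenizer.convert_tokens_to_ids("cute")

outputs = model(**inputs, labels=labels)
loss = outputs.loss
```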
Thank you, | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7643/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7643/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7642 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7642/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7642/comments | https://api.github.com/repos/huggingface/transformers/issues/7642/events | https://github.com/huggingface/transformers/pull/7642 | 716,662,797 | MDExOlB1bGxSZXF1ZXN0NDk5MzYwNTQ2 | 7,642 | Fix RobertaForCausalLM docs | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,602 | 1,602 | 1,602 | MEMBER | null | `RobertaLMHeadModel` does not exist, and we can't pass the `return_dict` value if a config has already been passed during instantiation.
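A sketch of what the corrected docstring example should presumably look like (the exact wording is in the diff):
```python
from transformers import RobertaTokenizer, RobertaForCausalLM, RobertaConfig
import torch

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
config = RobertaConfig.from_pretrained("roberta-base")
config.is_decoder = True  # RoBERTa is an encoder by default; flag it as a decoder for causal LM
model = RobertaForCausalLM.from_pretrained("roberta-base", config=config)

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs, return_dict=True)  # pass return_dict to forward, not with a config to from_pretrained
prediction_logits = outputs.logits
```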
closes #7635 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7642/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7642/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7642",
"html_url": "https://github.com/huggingface/transformers/pull/7642",
"diff_url": "https://github.com/huggingface/transformers/pull/7642.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7642.patch",
"merged_at": 1602160561000
} |
https://api.github.com/repos/huggingface/transformers/issues/7641 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7641/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7641/comments | https://api.github.com/repos/huggingface/transformers/issues/7641/events | https://github.com/huggingface/transformers/pull/7641 | 716,653,642 | MDExOlB1bGxSZXF1ZXN0NDk5MzUyOTAw | 7,641 | [s2s] configure lr_scheduler from command line | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,602 | 1,602 | 1,602 | MEMBER | null | # What does this PR do?
This PR adds the ability to configure `lr_scheduler` from command line for `Seq2SeqTrainer`.
Fixes #7543
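Example invocation (the flag name and choices below are assumed from the scheduler map used elsewhere in the examples, not quoted from the final diff):
```bash
python examples/seq2seq/finetune_trainer.py \
    --model_name_or_path <model> \
    --lr_scheduler cosine \
    ...
```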
@sshleifer | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7641/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7641/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7641",
"html_url": "https://github.com/huggingface/transformers/pull/7641",
"diff_url": "https://github.com/huggingface/transformers/pull/7641.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7641.patch",
"merged_at": 1602176795000
} |
https://api.github.com/repos/huggingface/transformers/issues/7640 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7640/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7640/comments | https://api.github.com/repos/huggingface/transformers/issues/7640/events | https://github.com/huggingface/transformers/pull/7640 | 716,560,146 | MDExOlB1bGxSZXF1ZXN0NDk5Mjc1MzEz | 7,640 | Create README.md for IsRoBERTa language model | {
"login": "donchev7",
"id": 11960967,
"node_id": "MDQ6VXNlcjExOTYwOTY3",
"avatar_url": "https://avatars.githubusercontent.com/u/11960967?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/donchev7",
"html_url": "https://github.com/donchev7",
"followers_url": "https://api.github.com/users/donchev7/followers",
"following_url": "https://api.github.com/users/donchev7/following{/other_user}",
"gists_url": "https://api.github.com/users/donchev7/gists{/gist_id}",
"starred_url": "https://api.github.com/users/donchev7/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/donchev7/subscriptions",
"organizations_url": "https://api.github.com/users/donchev7/orgs",
"repos_url": "https://api.github.com/users/donchev7/repos",
"events_url": "https://api.github.com/users/donchev7/events{/privacy}",
"received_events_url": "https://api.github.com/users/donchev7/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"Thanks for sharing! We had a few models already but only for translation: https://huggingface.co/models?filter=is"
] | 1,602 | 1,602 | 1,602 | CONTRIBUTOR | null | # What does this PR do?
Adds a model card README for the IsRoBERTa language model.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7640/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7640/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7640",
"html_url": "https://github.com/huggingface/transformers/pull/7640",
"diff_url": "https://github.com/huggingface/transformers/pull/7640.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7640.patch",
"merged_at": 1602103564000
} |
https://api.github.com/repos/huggingface/transformers/issues/7639 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7639/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7639/comments | https://api.github.com/repos/huggingface/transformers/issues/7639/events | https://github.com/huggingface/transformers/pull/7639 | 716,549,861 | MDExOlB1bGxSZXF1ZXN0NDk5MjY2ODc1 | 7,639 | [s2s] release pseudolabel links and instructions | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @patil-suraj "
] | 1,602 | 1,602 | 1,602 | CONTRIBUTOR | null | + Release a bunch of summarization and translation pseudolabels with reasonably nice documentation.
+ Allow `make_student(teacher, 'student_000_baseline', 12, 3, d_layers_to_copy=[0,0,0])` for baseline purposes. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7639/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7639/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7639",
"html_url": "https://github.com/huggingface/transformers/pull/7639",
"diff_url": "https://github.com/huggingface/transformers/pull/7639.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7639.patch",
"merged_at": 1602084044000
} |
https://api.github.com/repos/huggingface/transformers/issues/7638 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7638/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7638/comments | https://api.github.com/repos/huggingface/transformers/issues/7638/events | https://github.com/huggingface/transformers/issues/7638 | 716,536,213 | MDU6SXNzdWU3MTY1MzYyMTM= | 7,638 | error AttributeError: 'tuple' object has no attribute 'logits' | {
"login": "TzurV",
"id": 19628509,
"node_id": "MDQ6VXNlcjE5NjI4NTA5",
"avatar_url": "https://avatars.githubusercontent.com/u/19628509?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TzurV",
"html_url": "https://github.com/TzurV",
"followers_url": "https://api.github.com/users/TzurV/followers",
"following_url": "https://api.github.com/users/TzurV/following{/other_user}",
"gists_url": "https://api.github.com/users/TzurV/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TzurV/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TzurV/subscriptions",
"organizations_url": "https://api.github.com/users/TzurV/orgs",
"repos_url": "https://api.github.com/users/TzurV/repos",
"events_url": "https://api.github.com/users/TzurV/events{/privacy}",
"received_events_url": "https://api.github.com/users/TzurV/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Can you replace:\r\n\r\n```python\r\ntoken_logits = model(input).logits\r\n```\r\nby\r\n\r\n```python\r\ntoken_logits = model(input, return_dict=True).logits\r\n```\r\n\r\nand see if the error persists? ",
"Can you give a link to the example, so that we can fix the code snippet? ",
"Hi Patrick,\n\nThe suggested change fixed the problem.\n\nThank you.\nTzur\n\n\nOn Wed, 7 Oct 2020 at 18:34, Patrick von Platen <[email protected]>\nwrote:\n\n> Can you give a link to the example, so that we can fix the code snippet?\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/7638#issuecomment-705087242>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AEVYDXL4DBQIJO4MPX3QZS3SJSRBXANCNFSM4SHNAGUQ>\n> .\n>\n\n\n-- \nEmail: [email protected]\nHome: +44 (0) 1480 839198\nMobile: +44 (0) 7825 363873\nIsrael: +972 (0) 3 7201013\nAddress:\n23 Old Pinewood Way,\nPapworth Everard\nCambridge CB23 3GT\nUK\n",
"@patrickvonplaten The same issue is on this page https://huggingface.co/transformers/training.html\r\n\r\nIn these 2 lines \r\n`outputs = model(input_ids, attention_mask=attention_mask, labels=labels)`\r\n\r\n`outputs = model(input_ids, attention_mask=attention_mask)`\r\n\r\nThanks for the help!!\r\n",
"I fixed this problem by update transfromers from 3.0.2 to 4.23.1(the latest version in 2022.10.16)",
"import torch\r\nfrom torch.utils.data import DataLoader, TensorDataset\r\nfrom transformers import DistilBertTokenizer, DistilBertForQuestionAnswering, AdamW\r\n\r\n# Assuming you have already preprocessed the questions and answers\r\nquestions = [\r\n \"How do I initiate a policy action?\",\r\n \"What is the difference between policy and procedure?\",\r\n \"On average, how long does it take to complete a policy action?\", \r\n \"What happens to the policy once I submit my draft to UPO?\",\r\n \"Who do I talk to if I have a question about the content of a policy?\", \r\n \"Where do I find the previously approved versions of the policy?\", \r\n \"What is the legal sufficiency review?\", \r\n \"Substantive vs. non-substantive change?\", \r\n \"How do I know when the policy was last updated?\", \r\n \"Are there any rules about posting UNT policy on our departmental web page?\"\r\n] \r\n\r\nanswers = [\r\n \"To initiate a policy action, contact the University Policy Office at [email protected].\", \r\n \"A policy is a governing principle that mandates or constrains actions. Procedures outline the steps necessary to implement the policy.\", \r\n \" \",\r\n \" \",\r\n \" \",\r\n \" \",\r\n \" \",\r\n \" \",\r\n \" \",\r\n \" \",\r\n # Rest of answers\r\n]\r\n\r\n# Make sure questions and answers have the same length\r\nassert len(questions) == len(answers), \"Mismatch in the number of questions and answers\"\r\nlabels = torch.tensor([1] * len(questions)) # Assuming all questions have correct answers\r\n\r\ndef preprocess(questions, answers):\r\n tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')\r\n \r\n # Combine questions and answers into a list of strings\r\n inputs = [f\"{q} {a}\" for q, a in zip(questions, answers)]\r\n \r\n # Tokenize the combined strings\r\n tokenized_inputs = tokenizer(inputs, return_tensors='pt', padding=True, truncation=True)\r\n \r\n # Create tensors for input_ids, attention_mask\r\n input_ids = tokenized_inputs['input_ids']\r\n attention_mask = tokenized_inputs['attention_mask']\r\n \r\n return input_ids, attention_mask\r\n\r\ninput_ids, attention_mask = preprocess(questions, answers)\r\n\r\n# Create a DataLoader\r\ndataset = TensorDataset(input_ids, attention_mask, labels)\r\ndataloader = DataLoader(dataset, batch_size=8, shuffle=True)\r\n\r\n# Modify the labels to represent whether each question has an answer or not\r\n# For simplicity, let's assume 1 represents having an answer and 0 represents not having an answer\r\nlabels = torch.tensor([1 if a.strip() else 0 for a in answers])\r\n\r\n# Update the training loop\r\ndef train(model, dataloader, lr=5e-5, epochs=3):\r\n device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\r\n model.to(device)\r\n \r\n optimizer = AdamW(model.parameters(), lr=lr)\r\n criterion = torch.nn.BCEWithLogitsLoss() # Assuming binary classification (answer or not)\r\n \r\n for epoch in range(epochs):\r\n model.train()\r\n total_loss = 0.0\r\n \r\n for batch in dataloader:\r\n input_ids, attention_mask, labels = batch\r\n input_ids, attention_mask, labels = input_ids.to(device), attention_mask.to(device), labels.to(device)\r\n \r\n optimizer.zero_grad()\r\n \r\n outputs = model(input_ids, attention_mask=attention_mask)\r\n logits = outputs.logits\r\n \r\n loss = criterion(logits.squeeze(), labels.float())\r\n total_loss += loss.item()\r\n \r\n loss.backward()\r\n optimizer.step()\r\n \r\n print(f\"Epoch {epoch + 1}, Loss: {total_loss / len(dataloader)}\")\r\n\r\n# Instantiate the 
model\r\nbert_model = DistilBertForQuestionAnswering.from_pretrained('distilbert-base-uncased')\r\n\r\n# Train the model\r\ntrain(bert_model, dataloader)\r\n\r\n# Save the trained model\r\ntorch.save(bert_model.state_dict(), 'saved_model.pth')\r\n\r\n# Load the trained model\r\nloaded_model = DistilBertForQuestionAnswering.from_pretrained('distilbert-base-uncased')\r\nloaded_model.load_state_dict(torch.load('saved_model.pth'))\r\nloaded_model.eval()\r\n\r\n# Prediction function\r\ndef get_answer(question, model, tokenizer):\r\n inputs = tokenizer(question, return_tensors='pt')\r\n input_ids = inputs['input_ids']\r\n attention_mask = inputs['attention_mask']\r\n \r\n with torch.no_grad():\r\n outputs = model(input_ids, attention_mask=attention_mask)\r\n \r\n start_logits, end_logits = outputs.start_logits, outputs.end_logits\r\n start_idx = torch.argmax(start_logits)\r\n end_idx = torch.argmax(end_logits)\r\n \r\n answer = tokenizer.decode(input_ids[0, start_idx:end_idx+1], skip_special_tokens=True)\r\n \r\n return answer\r\n\r\n# Example usage\r\nquestion_to_predict = \"How do I initiate a policy action?\"\r\nanswer = get_answer(question_to_predict, loaded_model, DistilBertTokenizer.from_pretrained('distilbert-base-uncased'))\r\nprint(\"Answer:\", answer)\r\n\r\n\r\nfor the above code i am getting the bellow error\r\n\r\nSome weights of DistilBertForQuestionAnswering were not initialized from the model checkpoint at distilbert-base-uncased and are newly initialized: ['qa_outputs.bias', 'qa_outputs.weight']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\n---------------------------------------------------------------------------\r\nAttributeError Traceback (most recent call last)\r\n~\\AppData\\Local\\Temp\\ipykernel_18184\\3727763023.py in <module>\r\n 93 \r\n 94 # Train the model\r\n---> 95 train(bert_model, dataloader)\r\n 96 \r\n 97 # Save the trained model\r\n\r\n~\\AppData\\Local\\Temp\\ipykernel_18184\\3727763023.py in train(model, dataloader, lr, epochs)\r\n 79 \r\n 80 outputs = model(input_ids, attention_mask=attention_mask)\r\n---> 81 logits = outputs.logits\r\n 82 \r\n 83 loss = criterion(logits.squeeze(), labels.float())\r\n\r\nAttributeError: 'QuestionAnsweringModelOutput' object has no attribute 'logits'\r\n",
"你好,我已经收到您的邮件~",
"Can any one fix the error, please?",
"Hey, feel free to check this: https://github.com/younesbelkada/transformers/blob/587b8e6ce3742063e835c33d239a0a400c69d631/src/transformers/modeling_outputs.py#L1046 there is not `logits` key. Adapt your script accordingly 🤗 "
] | 1,602 | 1,701 | 1,602 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
```
AttributeError                            Traceback (most recent call last)
<ipython-input-4-594fed3b7299> in <module>()
      6 input = tokenizer.encode(sequence, return_tensors="pt")
      7 mask_token_index = torch.where(input == tokenizer.mask_token_id)[1]
----> 8 token_logits = model(input).logits
      9 mask_token_logits = token_logits[0, mask_token_index, :]
     10 top_5_tokens = torch.topk(mask_token_logits, 5, dim=1).indices[0].tolist()

AttributeError: 'tuple' object has no attribute 'logits'
```
- `transformers` version: 3.3.1 (pip reported: Successfully installed sacremoses-0.0.43 sentencepiece-0.1.91 tokenizers-0.8.1rc2 transformers-3.3.1)
- Platform: Colab (the "Masked Language Modeling" PyTorch example)
- Python version: 3.6.9
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
blenderbot: @mariamabarham
Bart: @sshleifer
Marian: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [V ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Open the "Summary of the tasks" Colab notebook (PyTorch)
2. Run the Masked Language Modeling example
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
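The top-5 fill-mask tokens should be printed. For what it's worth, the cell runs for me once `return_dict=True` is passed (or the tuple output is indexed); a minimal sketch, assuming the same kind of fill-mask code as in the Colab:
```python
import torch
from transformers import AutoModelWithLMHead, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased")
model = AutoModelWithLMHead.from_pretrained("distilbert-base-cased")

sequence = f"Using distilled models instead of the large versions would help {tokenizer.mask_token} our carbon footprint."
input = tokenizer.encode(sequence, return_tensors="pt")
mask_token_index = torch.where(input == tokenizer.mask_token_id)[1]

token_logits = model(input, return_dict=True).logits  # equivalently: model(input)[0]
mask_token_logits = token_logits[0, mask_token_index, :]
top_5_tokens = torch.topk(mask_token_logits, 5, dim=1).indices[0].tolist()
print([tokenizer.decode([t]) for t in top_5_tokens])
```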
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7638/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7638/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7637 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7637/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7637/comments | https://api.github.com/repos/huggingface/transformers/issues/7637/events | https://github.com/huggingface/transformers/issues/7637 | 716,517,196 | MDU6SXNzdWU3MTY1MTcxOTY= | 7,637 | ValueError("The training dataset must have an asserted cardinality") when running run_tf_text_classification.py | {
"login": "pvcastro",
"id": 12713359,
"node_id": "MDQ6VXNlcjEyNzEzMzU5",
"avatar_url": "https://avatars.githubusercontent.com/u/12713359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pvcastro",
"html_url": "https://github.com/pvcastro",
"followers_url": "https://api.github.com/users/pvcastro/followers",
"following_url": "https://api.github.com/users/pvcastro/following{/other_user}",
"gists_url": "https://api.github.com/users/pvcastro/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pvcastro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pvcastro/subscriptions",
"organizations_url": "https://api.github.com/users/pvcastro/orgs",
"repos_url": "https://api.github.com/users/pvcastro/repos",
"events_url": "https://api.github.com/users/pvcastro/events{/privacy}",
"received_events_url": "https://api.github.com/users/pvcastro/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hello!\r\n\r\nThis is a bug indeed, I will fix it ASAP!\r\n\r\nAbout the issue with forcing `from_pt` to True, you should just give the name of your PT model that finishes with `.bin` and not the folder.",
"Hi @jplu !\r\nRegarding the `from_pt` parameter, so there is no way for me to use the model name which was uploaded to huggingface? I have to download it to my machine and refer to the .bin name?\r\nThere is a problem there, because `AutoConfig.from_pretrained` uses the same parameter and throws an error when we use the .bin path:\r\n```\r\nTraceback (most recent call last):\r\n File \"/media/discoD/repositorios/transformers_pedro/src/transformers/configuration_utils.py\", line 360, in get_config_dict\r\n config_dict = cls._dict_from_json_file(resolved_config_file)\r\n File \"/media/discoD/repositorios/transformers_pedro/src/transformers/configuration_utils.py\", line 442, in _dict_from_json_file\r\n text = reader.read()\r\n File \"/media/discoD/anaconda3/envs/transformers/lib/python3.7/codecs.py\", line 322, in decode\r\n (result, consumed) = self._buffer_decode(data, self.errors, final)\r\nUnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte\r\n```",
"Ok, I will take this as a separate issue. A PR for the cardinality issue will arrive by today.",
"@jplu thanks for the fix!\r\nDid you get to open another issue for this?\r\n\r\n> Ok, I will take this as a separate issue. A PR for the cardinality issue will arrive by today."
] | 1,602 | 1,602 | 1,602 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.3.1 (installed from master)
- Platform: Linux-4.15.0-118-generic-x86_64-with-debian-stretch-sid
- Python version: 3.7.9
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): 2.3.1 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
blenderbot: @mariamabarham
Bart: @sshleifer
Marian: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
@jplu
## Information
Model I am using (Bert, XLNet ...): Bert (bert-base-uncased)
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: SST-2
* [x] my own task or dataset: (give details below)
The same problem happened with my custom dataset, as I described in #7535, and also with SST-2 from GLUE (which I ran to confirm the error). The following steps use SST-2 with bert-base-uncased.
## To reproduce
Steps to reproduce the behavior:
1. Created a new conda environment using `conda create -n transformers python=3.7`
2. Cloned transformers master, `cd` into it and installed using `pip install --editable . -r examples/requirements.txt`
3. Installed tensorflow with `pip install tensorflow`
4. Updated datasets to version 1.1.1, as needed according to issue #7535
5. Ran `run_tf_text_classification.py` with the following parameters:
```
--train_file <DATASET_PATH>/train.csv \
--dev_file <DATASET_PATH>/dev.csv \
--test_file <DATASET_PATH>/dev.csv \
--label_column_id 1 \
--model_name_or_path bert-base-uncased \
--output_dir <OUTPUT_PATH> \
--num_train_epochs 4 \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 4 \
--do_train \
--do_eval \
--do_predict \
--logging_steps 1000 \
--evaluate_during_training \
--save_steps 1000 \
--overwrite_output_dir \
--overwrite_cache
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
Here is the stack trace:
```
10/07/2020 09:48:49 - INFO - __main__ - Training/evaluation parameters TFTrainingArguments(output_dir='/media/discoD/models/datalawyer/pedidos/transformers_tf', overwrite_output_dir=True, do_train=True, do_eval=True, do_predict=True, evaluate_during_training=True, evaluation_strategy=<EvaluationStrategy.STEPS: 'steps'>, prediction_loss_only=False, per_device_train_batch_size=1, per_device_eval_batch_size=1, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=4.0, max_steps=-1, warmup_steps=0, logging_dir='runs/Oct07_09-48-45_user-XPS-8700', logging_first_step=False, logging_steps=10000, save_steps=10000, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=10000, dataloader_num_workers=0, past_index=-1, run_name='/media/discoD/models/datalawyer/pedidos/transformers_tf', disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=False, tpu_name=None, xla=False)
10/07/2020 09:48:52 - INFO - filelock - Lock 140079222710992 acquired on /home/user/.cache/huggingface/datasets/c19c3494c195b40ef4234cb533a8f3ce0bca75ffcf602cc246c390073e633c46.1d5301eeb143a6a4f6f3a2bf726921db0de85048303426a3810f96d735d50d8a.py.lock
10/07/2020 09:48:52 - INFO - filelock - Lock 140079222710992 released on /home/user/.cache/huggingface/datasets/c19c3494c195b40ef4234cb533a8f3ce0bca75ffcf602cc246c390073e633c46.1d5301eeb143a6a4f6f3a2bf726921db0de85048303426a3810f96d735d50d8a.py.lock
Using custom data configuration default
10/07/2020 09:48:52 - INFO - filelock - Lock 140084305595600 acquired on /home/user/.cache/huggingface/datasets/_home_user_.cache_huggingface_datasets_csv_default-477ee137eed7e5ae_0.0.0_49187751790fa4d820300fd4d0707896e5b941f1a9c644652645b866716a4ac4.lock
10/07/2020 09:48:52 - INFO - filelock - Lock 140084305595600 released on /home/user/.cache/huggingface/datasets/_home_user_.cache_huggingface_datasets_csv_default-477ee137eed7e5ae_0.0.0_49187751790fa4d820300fd4d0707896e5b941f1a9c644652645b866716a4ac4.lock
10/07/2020 09:48:52 - INFO - filelock - Lock 140080785346896 acquired on /home/user/.cache/huggingface/datasets/_home_user_.cache_huggingface_datasets_csv_default-477ee137eed7e5ae_0.0.0_49187751790fa4d820300fd4d0707896e5b941f1a9c644652645b866716a4ac4.lock
Reusing dataset csv (/home/user/.cache/huggingface/datasets/csv/default-477ee137eed7e5ae/0.0.0/49187751790fa4d820300fd4d0707896e5b941f1a9c644652645b866716a4ac4)
10/07/2020 09:48:52 - INFO - filelock - Lock 140080785346896 released on /home/user/.cache/huggingface/datasets/_home_user_.cache_huggingface_datasets_csv_default-477ee137eed7e5ae_0.0.0_49187751790fa4d820300fd4d0707896e5b941f1a9c644652645b866716a4ac4.lock
100%|██████████| 68/68 [01:20<00:00, 1.18s/ba]
100%|██████████| 1/1 [00:01<00:00, 1.71s/ba]
100%|██████████| 1/1 [00:01<00:00, 1.44s/ba]
10/07/2020 09:50:23 - INFO - filelock - Lock 140078150630032 acquired on /home/user/.cache/torch/transformers/336363d3718f8cc6432db4a768a053f96a9eae064c8c96aff2bc69fe73929770.4733ec82e81d40e9cf5fd04556267d8958fb150e9339390fc64206b7e5a79c83.h5.lock
Downloading: 100%|██████████| 536M/536M [04:08<00:00, 2.16MB/s]
10/07/2020 09:54:32 - INFO - filelock - Lock 140078150630032 released on /home/user/.cache/torch/transformers/336363d3718f8cc6432db4a768a053f96a9eae064c8c96aff2bc69fe73929770.4733ec82e81d40e9cf5fd04556267d8958fb150e9339390fc64206b7e5a79c83.h5.lock
2020-10-07 09:54:46.214922: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 93763584 exceeds 10% of free system memory.
Some weights of the model checkpoint at bert-base-uncased were not used when initializing TFBertForSequenceClassification: ['nsp___cls', 'mlm___cls']
- This IS expected if you are initializing TFBertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).
- This IS NOT expected if you are initializing TFBertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of TFBertForSequenceClassification were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['dropout_37', 'classifier']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Traceback (most recent call last):
File "/media/discoD/pycharm-community-2019.2/plugins/python-ce/helpers/pydev/pydevd.py", line 1448, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "/media/discoD/pycharm-community-2019.2/plugins/python-ce/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/media/discoD/repositorios/transformers_pedro/examples/text-classification/run_tf_text_classification.py", line 283, in <module>
main()
File "/media/discoD/repositorios/transformers_pedro/examples/text-classification/run_tf_text_classification.py", line 258, in main
trainer.train()
File "/media/discoD/repositorios/transformers_pedro/src/transformers/trainer_tf.py", line 474, in train
train_ds = self.get_train_tfdataset()
File "/media/discoD/repositorios/transformers_pedro/src/transformers/trainer_tf.py", line 140, in get_train_tfdataset
raise ValueError("The training dataset must have an asserted cardinality")
ValueError: The training dataset must have an asserted cardinality
```
## Expected behavior
Should be able to run the text-classification example as described in [https://github.com/huggingface/transformers/tree/master/examples/text-classification#run-generic-text-classification-script-in-tensorflow](https://github.com/huggingface/transformers/tree/master/examples/text-classification#run-generic-text-classification-script-in-tensorflow)
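In the meantime, a workaround that unblocks training on my side is to assert the cardinality on the dataset manually before it reaches the trainer. A sketch, assuming `tf_train_dataset` is the `tf.data.Dataset` built by the script and `num_train_examples` is known (both names are placeholders):
```python
import tensorflow as tf

# TFTrainer refuses datasets with unknown cardinality, so assert it explicitly
tf_train_dataset = tf_train_dataset.apply(
    tf.data.experimental.assert_cardinality(num_train_examples)
)
```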
## Additional info: For my own data, using our bert-portuguese model, we don't have a TensorFlow version of the model available. So I had to force `from_pt` in the code below to be True; otherwise I would get a different error. The [script which converts PyTorch checkpoints to TensorFlow](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_bert_pytorch_checkpoint_to_original_tf.py) doesn't work with TF 2.0.
```
with training_args.strategy.scope():
model = TFAutoModelForSequenceClassification.from_pretrained(
model_args.model_name_or_path,
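            # from_pt forced to True here: only a PyTorch (.bin) checkpoint exists for this model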
from_pt=bool(".bin" in model_args.model_name_or_path),
config=config,
cache_dir=model_args.cache_dir,
)
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7637/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7637/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7636 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7636/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7636/comments | https://api.github.com/repos/huggingface/transformers/issues/7636/events | https://github.com/huggingface/transformers/pull/7636 | 716,483,152 | MDExOlB1bGxSZXF1ZXN0NDk5MjExNjc4 | 7,636 | Model templates | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,602 | 1,651 | 1,605 | MEMBER | null | This PR adds a `cookiecutter`-based utility to generate configuration/modeling/tokenization files, the test suites and the statements across the library necessary for adding a new model.
This PR's goal is to make adding a new model way simpler, by having a simple CLI request information, generate files, that will then need to be edited to implement the changes relative to BERT. The test suites are implemented and run.
Left to do:
- [x] TensorFlow files
- [x] Tokenizer files
- [x] Remove the pooler from the base model
- [x] Ensure the documentation has the right format + .rst file
- [x] Clean-up/refactor the `add_new_model.py` file
- [x] Clarify and add comments to the "dark arts" parts of this PR, such as the `to_replace` file.
- [x] Add encoder-decoder models:
- [x] Modeling PT file
- [x] Configuration file
- [x] Testing the modeling PT file
- [x] Modeling TF file
- [x] Testing the modeling TF file
- [x] Update the RST with the appropriate files
- [x] Add to all auto + init files
- [x] Run the CI on generated files
- [x] Update the LysBERT proposal to something better
- [x] Do a checklist of things left to do after running the script
Possible improvements:
- [ ] Ask the user whether they want to support `token_type_ids`
## For reviewers
If you review this PR, the simplest approach is to review the `add_new_model.py` file and to generate model files using the utility:
```
transformers-cli add_new_model
```
And review the generated files.
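To try the flow end to end, a plausible sequence (the test file names depend on the model name entered at the prompts; `lysbert` below is just the placeholder mentioned in the checklist):
```bash
transformers-cli add_new_model
# answer the prompts, then run the generated test suites
python -m pytest tests/test_modeling_lysbert.py tests/test_modeling_tf_lysbert.py
```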
Reviewing the current files with `{{cookiecutter.lowercase_modelname}}` doesn't seem reviewer-friendly to me. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7636/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7636/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7636",
"html_url": "https://github.com/huggingface/transformers/pull/7636",
"diff_url": "https://github.com/huggingface/transformers/pull/7636.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7636.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/7635 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7635/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7635/comments | https://api.github.com/repos/huggingface/transformers/issues/7635/events | https://github.com/huggingface/transformers/issues/7635 | 716,443,911 | MDU6SXNzdWU3MTY0NDM5MTE= | 7,635 | ImportError: cannot import name 'RobertaLMHeadModel' | {
"login": "lighteternal",
"id": 22905968,
"node_id": "MDQ6VXNlcjIyOTA1OTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/22905968?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lighteternal",
"html_url": "https://github.com/lighteternal",
"followers_url": "https://api.github.com/users/lighteternal/followers",
"following_url": "https://api.github.com/users/lighteternal/following{/other_user}",
"gists_url": "https://api.github.com/users/lighteternal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lighteternal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lighteternal/subscriptions",
"organizations_url": "https://api.github.com/users/lighteternal/orgs",
"repos_url": "https://api.github.com/users/lighteternal/repos",
"events_url": "https://api.github.com/users/lighteternal/events{/privacy}",
"received_events_url": "https://api.github.com/users/lighteternal/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, the `RobertaLMHeadModel` is a PyTorch model, you would need to have PyTorch installed to import it.\r\n\r\nIf you want to use the TensorFlow variant, you should use the `TFRobertaLMHeadModel`",
"Yeap, sorry for not including it in the Environment info, I have torch 1.6 installed. \r\nThe following script:\r\n```\r\nimport transformers \r\nprint(transformers.__version__)\r\nimport torch \r\nprint(torch.__version__)\r\nimport tensorflow \r\nprint(tensorflow.__version__)\r\n\r\nfrom transformers import RobertaTokenizer, RobertaLMHeadModel, RobertaConfig\r\n```\r\nReturns:\r\n```\r\n3.1.0\r\n1.6.0\r\n2.3.1\r\n---------------------------------------------------------------------------\r\nImportError Traceback (most recent call last)\r\n<ipython-input-4-aecd14032a4d> in <module>\r\n 6 print(tensorflow.__version__)\r\n 7 \r\n----> 8 from transformers import RobertaTokenizer, RobertaLMHeadModel, RobertaConfig\r\n\r\nImportError: cannot import name 'RobertaLMHeadModel'\r\n```\r\n\r\nThe tflow variant returns the same error.",
"My bad, I read too fast! The error is probably because you're trying to import `RobertaLMHeadModel`, but as it can be seen in your first post, the model is actually `RobertaForCausalLM`. Can you successfully load that model? We plan on having uniform naming for these models so that the `CausalLM` and `LMHeadModel` have the same naming soon.",
"Many thanks for the quick replies! :)\r\n\r\nYes, it can be loaded that way. However, after running the following script: \r\n\r\n```\r\nimport torch\r\nfrom transformers import RobertaTokenizer, RobertaForCausalLM, RobertaConfig\r\ntokenizer = RobertaTokenizer.from_pretrained('roberta-base')\r\nconfig = RobertaConfig.from_pretrained(\"roberta-base\")\r\nconfig.is_decoder = True\r\nmodel = RobertaForCausalLM.from_pretrained('roberta-base', config=config, return_dict=True)\r\ninputs = tokenizer(\"Hello, my dog is cute\", return_tensors=\"pt\")\r\noutputs = model(**inputs)\r\nprediction_logits = outputs.logits\r\n```\r\n\r\nThe following error appears: \r\n```\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-99-f1066c26064d> in <module>\r\n 4 config = RobertaConfig.from_pretrained(\"roberta-base\")\r\n 5 config.is_decoder = True\r\n----> 6 model = RobertaForCausalLM.from_pretrained('roberta-base', config=config, return_dict=True)\r\n 7 inputs = tokenizer(\"Hello, my dog is cute\", return_tensors=\"pt\")\r\n 8 outputs = model(**inputs)\r\n\r\n~/translation/DimPapSandbox/greek_text_generation/tsflow23/lib/python3.6/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)\r\n 921 \r\n 922 # Instantiate model.\r\n--> 923 model = cls(config, *model_args, **model_kwargs)\r\n 924 \r\n 925 if state_dict is None and not from_tf:\r\n\r\nTypeError: __init__() got an unexpected keyword argument 'return_dict'\r\n```\r\n\r\nIf I remove the `return_dict` argument, another error comes up:\r\n```\r\n---------------------------------------------------------------------------\r\nAttributeError Traceback (most recent call last)\r\n<ipython-input-100-44c62bef9ec6> in <module>\r\n 7 inputs = tokenizer(\"Hello, my dog is cute\", return_tensors=\"pt\")\r\n 8 outputs = model(**inputs)\r\n----> 9 prediction_logits = outputs.logits\r\n\r\nAttributeError: 'tuple' object has no attribute 'logits' \r\n```\r\n\r\nIf there's a working snippet using the Roberta or XLM-Roberta for text generation, it would be much appreciated. ",
"Indeed, there's an issue with the docstrings here. I'm fixing it in #7642.\r\n\r\nHave you taken a look at the summary of text generation [here](https://huggingface.co/transformers/task_summary.html#text-generation)?\r\n\r\nPlease note that RoBERTa has not been trained to do text generation, but to do mask in-filling, so using a pre-trained RoBERTa model to do generation would yield bad results.",
"Τhanks, I was actually interested in XLM-R (looking for low-resource language text generation) but I stumbled upon the RoBERTa example shown above first so I thought I could just swap the model and it would work. I can confirm that the following code works with xlm-roberta-large: \r\n\r\n```\r\nfrom transformers import AutoModelWithLMHead, AutoTokenizer\r\nmodel = AutoModelWithLMHead.from_pretrained(\"xlm-roberta-large\")\r\ntokenizer = AutoTokenizer.from_pretrained(\"xlm-roberta-large\")\r\nprompt = \"Σήμερα ο καιρός\" # means: \"Today the weather\", in Greek\"\r\ninputs = tokenizer.encode(PADDING_TEXT + prompt, add_special_tokens=False, return_tensors=\"pt\")\r\nprompt_length = len(tokenizer.decode(inputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=True))\r\noutputs = model.generate(inputs, max_length=250, do_sample=True, top_p=0.95, top_k=60)\r\ngenerated = prompt + tokenizer.decode(outputs[0])[prompt_length:]\r\nprint(generated)\r\n```\r\nHowever the generated output is not really useful (repeating the word \"weather\"): \r\n\r\n`Σήμερα ο καιρός ιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός`\r\n\r\nΙ understood that XLM-R had a CLM checkpoint but maybe I was wrong. In any case, if you are aware of any pretrained models that could I could try for text prediction (I am interested in Greek, where GPT-2 does not really shine) it would be great. Otherwise, we can close this issue. :)",
"@lighteternal Have you checked whether this model is any good for your use case? https://huggingface.co/nikokons/gpt2-greek?text=%CE%A3%CE%AE%CE%BC%CE%B5%CF%81%CE%B1+%CE%BF+%CE%BA%CE%B1%CE%B9%CF%81%CF%8C%CF%82",
"model author is @nikkon3",
"Hi @julien-c, yes I have tried it in the past. It is notably better compared to the vanilla GPT-2 in most cases (the latter \"has\" Greek tokens in its vocabulary, but the relative corpus that was used must have been extremely small for any useful inference). However even the `nikokons/gpt-2-greek` is sometimes generating sentences that, while syntactically OK, are not relevant to the input context. Probably a larger and more diverse training corpus would help. \r\n\r\nI have been experimenting a while with this, and my conclusion is that for now the most \"robust\" generations for Greek are made by masked LM which are repurposed to causal ones, e.g. If I use a BERT-like model, I put the mask at the end of the unfinished sentence:\r\n`\"This is a great <mask> ...\"` Of course this comes with the problem that I have to reuse the mask's result to feed it as input in case I want more than one token to be predicted. Autoregressive models either return non-sense or drift away from the input really quickly. ",
"yes @lighteternal the dataset that is used for gpt2-greek is not large . It is trained on about 5 GB of text with the main source to be from Greek Wikipedia ."
] | 1,602 | 1,602 | 1,602 | NONE | null | Hi all, I was just trying to run a text generation script for low-resource languages, and therefore experimented with XLM-R and initially with Roberta, using the documentation for RobertaForCausalLM:
here: https://huggingface.co/transformers/model_doc/roberta.html#robertaforcausallm
I am running into the import error shown in the title. See code snippet and error message below. I also experimented with different Tensorflow and transformers versions to no avail. I suspect that the model classes have changed (or the documentation may not be up to date with the current version). I also tried importing RobertaForCausalLM but it returned the same error.
## Environment info
- `transformers` version: 3.1.0
- Platform: Linux-4.15.0-112-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- Tensorflow version: 2.3.1
### Who can help
@LysandreJik , @TevenLeScao
Model I am using (**Roberta**, **XLM-Roberta**):
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Run:
```
from transformers import RobertaTokenizer, RobertaLMHeadModel, RobertaConfig
import torch
tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
config = RobertaConfig.from_pretrained("roberta-base")
config.is_decoder = True
model = RobertaLMHeadModel.from_pretrained('roberta-base', config=config, return_dict=True)
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
prediction_logits = outputs.logits
```
Error: `ImportError: cannot import name 'RobertaLMHeadModel'`
If this script runs successfully, I'd like to re-run it for XLM-Roberta (changing the imports and model names, of course). A sketch of the corrected call, based on the discussion above, follows.
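
For reference, here is a minimal sketch of what I expect the working call to look like: it assumes the class is actually `RobertaForCausalLM` and that, in 3.1.0, `return_dict` has to be set on the config rather than passed to `from_pretrained` (both points suggested in the comments above).

```python
from transformers import RobertaConfig, RobertaForCausalLM, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
config = RobertaConfig.from_pretrained("roberta-base")
config.is_decoder = True
config.return_dict = True  # assumption: in 3.1.0 this must live on the config

model = RobertaForCausalLM.from_pretrained("roberta-base", config=config)
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
prediction_logits = outputs.logits
```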
Many thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7635/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7635/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7634 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7634/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7634/comments | https://api.github.com/repos/huggingface/transformers/issues/7634/events | https://github.com/huggingface/transformers/issues/7634 | 716,442,285 | MDU6SXNzdWU3MTY0NDIyODU= | 7,634 | ImportError: cannot import name 'RobertaLMHeadModel' | {
"login": "earendil91",
"id": 72315099,
"node_id": "MDQ6VXNlcjcyMzE1MDk5",
"avatar_url": "https://avatars.githubusercontent.com/u/72315099?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/earendil91",
"html_url": "https://github.com/earendil91",
"followers_url": "https://api.github.com/users/earendil91/followers",
"following_url": "https://api.github.com/users/earendil91/following{/other_user}",
"gists_url": "https://api.github.com/users/earendil91/gists{/gist_id}",
"starred_url": "https://api.github.com/users/earendil91/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/earendil91/subscriptions",
"organizations_url": "https://api.github.com/users/earendil91/orgs",
"repos_url": "https://api.github.com/users/earendil91/repos",
"events_url": "https://api.github.com/users/earendil91/events{/privacy}",
"received_events_url": "https://api.github.com/users/earendil91/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,602 | 1,602 | 1,602 | NONE | null | Hi all, I was just trying to run a text generation script for low-resource languages, and therefore experimented with XLM-R and initially with Roberta, using the documentation for RobertaForCausalLM:
here: https://huggingface.co/transformers/model_doc/roberta.html#robertaforcausallm
I am running into the import error shown in the title. See code snippet and error message below. I also experimented with different Tensorflow and transformers versions to no avail. I suspect that the model classes have changed (or the documentation may not be up to date with the current version). I also tried importing RobertaForCausalLM but it returned the same error.
## Environment info
- `transformers` version: 3.1.0
- Platform: Linux-4.15.0-112-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- Tensorflow version: 2.3.1
### Who can help
@LysandreJik , @TevenLeScao
Model I am using (**Roberta**, **XLM-Roberta**):
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Run:
```
from transformers import RobertaTokenizer, RobertaLMHeadModel, RobertaConfig
import torch
tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
config = RobertaConfig.from_pretrained("roberta-base")
config.is_decoder = True
model = RobertaLMHeadModel.from_pretrained('roberta-base', config=config, return_dict=True)
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
prediction_logits = outputs.logits
```
Error: `ImportError: cannot import name 'RobertaLMHeadModel'`
If this script runs successfully, I'd like to re-run it for XLM-Roberta (changing the imports and model names, of course); a sketch of that swap follows.
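
Concretely, the XLM-Roberta version I intend to run would look roughly like this (a sketch using the generic auto classes; note that XLM-R was trained for mask in-filling, not causal generation, so output quality may be poor):

```python
from transformers import AutoModelWithLMHead, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")
model = AutoModelWithLMHead.from_pretrained("xlm-roberta-large")

prompt = "Σήμερα ο καιρός"  # "Today the weather", in Greek
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(inputs, max_length=50, do_sample=True, top_p=0.95, top_k=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```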
Many thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7634/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7634/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7633 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7633/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7633/comments | https://api.github.com/repos/huggingface/transformers/issues/7633/events | https://github.com/huggingface/transformers/issues/7633 | 716,376,357 | MDU6SXNzdWU3MTYzNzYzNTc= | 7,633 | How to get cross attention for bert when config.add_cross_attention is True | {
"login": "qmeeus",
"id": 25608944,
"node_id": "MDQ6VXNlcjI1NjA4OTQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/25608944?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qmeeus",
"html_url": "https://github.com/qmeeus",
"followers_url": "https://api.github.com/users/qmeeus/followers",
"following_url": "https://api.github.com/users/qmeeus/following{/other_user}",
"gists_url": "https://api.github.com/users/qmeeus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qmeeus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qmeeus/subscriptions",
"organizations_url": "https://api.github.com/users/qmeeus/orgs",
"repos_url": "https://api.github.com/users/qmeeus/repos",
"events_url": "https://api.github.com/users/qmeeus/events{/privacy}",
"received_events_url": "https://api.github.com/users/qmeeus/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hey @qmeeus - thanks for your issue. This corresponds actually to a larger feature requests since we never return the attention masks at the moment. I will open a discussion about this internally and add it to the projects. ",
"Hi @patrickvonplaten and thank you for your answer ! Let me know if I can help in any way with the developments",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This should be resolved now. Bert2Bert can return cross attention masks with `output_attentions=True`"
] | 1,602 | 1,607 | 1,607 | CONTRIBUTOR | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
<!-- Description of your issue -->
As far as I can tell, when using BERT with cross-attention and `output_attentions=True`, the returned attentions only contain self-attention (i.e. a tuple of length `num_hidden_layers` with tensors of shape `(batch_size, num_heads, seq_length, seq_length)`). How can I get the cross-attention weights in that case?
After digging a little bit, I saw that ModelOutput (and child classes) do not include cross-attention as a potential output. The cross-attention is returned by [BertLayer](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L420) (at index 2) but then ignored in [BertEncoder](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L486). A quick look at the outputs for Encoder-Decoder models shows the same issue. Would it be possible to include cross-attention in model outputs? And if yes, how can I help do so? A sketch of the API I have in mind follows.
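
Concretely, something like this; the `cross_attentions` output field is the addition I am proposing (it does not exist yet), and the decoder-style config flags are my assumptions about how one would wire BERT up for cross-attention:

```python
import torch
from transformers import BertConfig, BertModel

config = BertConfig.from_pretrained("bert-base-uncased", is_decoder=True, add_cross_attention=True)
model = BertModel.from_pretrained("bert-base-uncased", config=config)

input_ids = torch.tensor([[101, 7592, 2088, 102]])            # decoder-side input
encoder_hidden_states = torch.randn(1, 7, config.hidden_size)  # stand-in encoder output

outputs = model(
    input_ids,
    encoder_hidden_states=encoder_hidden_states,
    output_attentions=True,
    return_dict=True,
)
self_attentions = outputs.attentions          # already returned today
cross_attentions = outputs.cross_attentions   # proposed: per layer, (batch, heads, tgt_len, src_len)
```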
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to similar question on the forum/Stack Overflow**: https://discuss.huggingface.co/t/how-to-get-cross-attention-values-of-t5/970 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7633/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7633/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7632 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7632/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7632/comments | https://api.github.com/repos/huggingface/transformers/issues/7632/events | https://github.com/huggingface/transformers/issues/7632 | 716,370,517 | MDU6SXNzdWU3MTYzNzA1MTc= | 7,632 | Unique names for dataset cache for each tokenizer | {
"login": "Santosh-Gupta",
"id": 5524261,
"node_id": "MDQ6VXNlcjU1MjQyNjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/5524261?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Santosh-Gupta",
"html_url": "https://github.com/Santosh-Gupta",
"followers_url": "https://api.github.com/users/Santosh-Gupta/followers",
"following_url": "https://api.github.com/users/Santosh-Gupta/following{/other_user}",
"gists_url": "https://api.github.com/users/Santosh-Gupta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Santosh-Gupta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Santosh-Gupta/subscriptions",
"organizations_url": "https://api.github.com/users/Santosh-Gupta/orgs",
"repos_url": "https://api.github.com/users/Santosh-Gupta/repos",
"events_url": "https://api.github.com/users/Santosh-Gupta/events{/privacy}",
"received_events_url": "https://api.github.com/users/Santosh-Gupta/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,602 | 1,608 | 1,608 | CONTRIBUTOR | null | # 🚀 Feature request
Currently in the examples, dataset caches are named after the family of tokenizer used, for example 'cached_train_BertTokenizer_128'. This may lead to unexpected behavior when running multiple models/tokenizers within the same model type, but with different variations of the model/tokenizer; a sketch of a collision-free naming scheme follows the examples below.
In this Colab notebook, NER training is run using SciBERT and then bert-base-cased. Even though the old data files are removed, the code still uses the old cache, resulting in an indexing error due to the mismatched token indices.
https://colab.research.google.com/drive/1q4uBFm81yBWVNzG3Si2ByBh1nw8fk-Q5?usp=sharing
In this Colab notebook, NER training is run on scibert-cased and then scibert-uncased. Here, no explicit error occurs since there isn't an indexing error, but it seems that the wrong dataset is being used. In the output of scibert-uncased, there are many warnings about unpredicted tokens and a lower-than-expected score. Neither of these occurs if scibert-cased is not run before scibert-uncased.
https://colab.research.google.com/drive/1pnpWfRqX4nknc0RRe9A2CArbVok3NbhC?usp=sharing
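
A minimal sketch of the naming scheme I have in mind (a hypothetical helper; `name_or_path` may not exist on older tokenizer versions, hence the `getattr` fallback):

```python
import hashlib

def cache_file_name(mode, tokenizer, max_seq_length):
    # key on the exact checkpoint identity, not just the tokenizer class,
    # so scibert-cased / scibert-uncased / bert-base-cased get distinct caches
    ident = str(getattr(tokenizer, "name_or_path", tokenizer.__class__.__name__))
    digest = hashlib.md5(ident.encode("utf-8")).hexdigest()[:8]
    return f"cached_{mode}_{tokenizer.__class__.__name__}_{digest}_{max_seq_length}"
```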
## Motivation
Prevent unexpected behavior when testing on multiple variations on the same transformer architecture.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7632/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7632/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7631 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7631/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7631/comments | https://api.github.com/repos/huggingface/transformers/issues/7631/events | https://github.com/huggingface/transformers/issues/7631 | 716,360,790 | MDU6SXNzdWU3MTYzNjA3OTA= | 7,631 | Is there a fine-tuning script for DPR? | {
"login": "shamanez",
"id": 16892570,
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shamanez",
"html_url": "https://github.com/shamanez",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"repos_url": "https://api.github.com/users/shamanez/repos",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hey @shamanez - I don't think there is a fine-tuning script for DPR at the moment, but we always welcome contributions as such! @lhoestq might have more information. \r\n",
"I just have one more question about the DPR model used in RAG (specially the **Doc-Encoder network**).\r\n\r\nIs the **doc-encoder** pretrained with a 21-million Wikipedia dump as mentioned in the DPR paper?",
"The DPR encoders (context encoder and question encoder) in RAG are pretrained BERT that were fine-tuned for retrieval on the question/answers pairs of Natural Questions (and other datasets depending on the setup) using retrieved passages from the 21 million passages Wikipedia dump. In the library, the DPR encoders are the one trained on NQ.",
"Thanks a lot. So can I use these encoders to ginetune the rag on customized\ndocument settings given the fact that question encoder also get fine-tuned.\n\nOn Thu, Oct 8, 2020, 21:49 Quentin Lhoest <[email protected]> wrote:\n\n> The DPR encoders (context encoder and question encoder) in RAG are\n> pretrained BERT that were fine-tuned for retrieval on the question/answers\n> pairs of Natural Questions (and other datasets depending on the setup)\n> using retrieved passages from the 21 million passages Wikipedia dump. In\n> the library, the DPR encoders are the one trained on NQ.\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/7631#issuecomment-705426902>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AEA4FGXZKERI55XOZQ5TWATSJV4JHANCNFSM4SHEODIA>\n> .\n>\n",
"Yes you can fine-tune it on your documents. During RAG fine-tuning both the generator and the question encoder are updated.",
"Thanks :). So finally what is the best way to arrange customized set of\ndocuments?\n\nOn Thu, Oct 8, 2020, 22:23 Quentin Lhoest <[email protected]> wrote:\n\n> Yes you can fine-tune it on your documents. During RAG fine-tuning both\n> the generator and the question encoder are updated.\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/7631#issuecomment-705446147>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AEA4FGSPUR4YUA3F2GTSIJTSJWAIXANCNFSM4SHEODIA>\n> .\n>\n",
"You'll find all the info at https://github.com/huggingface/transformers/tree/master/examples/rag#finetuning :)",
"Amazing. Thanks a lot\n\nOn Thu, Oct 8, 2020, 22:27 Quentin Lhoest <[email protected]> wrote:\n\n> You'll find all the info at\n> https://github.com/huggingface/transformers/tree/master/examples/rag#finetuning\n> :)\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/7631#issuecomment-705448525>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AEA4FGSSRJFHM7YDUQSUFKDSJWAYBANCNFSM4SHEODIA>\n> .\n>\n",
"I kind of checked the finetuning script. It shows how to train for custom\ndatasets. What I don't understand is how should I use my own set of\ndocuments other that wikipedia's dumps.\n\nOn Thu, Oct 8, 2020, 22:27 Quentin Lhoest <[email protected]> wrote:\n\n> You'll find all the info at\n> https://github.com/huggingface/transformers/tree/master/examples/rag#finetuning\n> :)\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/7631#issuecomment-705448525>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AEA4FGSSRJFHM7YDUQSUFKDSJWAYBANCNFSM4SHEODIA>\n> .\n>\n",
"Oh I see. In that case you have to build the RAG knowledge source. We haven't released a code example to do so yet but we're discussing it in #7462 ",
"Ok will follow it.\n\nOn Thu, Oct 8, 2020, 22:36 Quentin Lhoest <[email protected]> wrote:\n\n> Oh I see. In that case you have to build the RAG knowledge source. We\n> haven't released a code example to do so yet but we're discussing it in\n> #7462 <https://github.com/huggingface/transformers/issues/7462>\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/7631#issuecomment-705453565>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AEA4FGTDMISMY7F2TSNGA5LSJWB3VANCNFSM4SHEODIA>\n> .\n>\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,602 | 1,608 | 1,608 | CONTRIBUTOR | null | It would be nice to have a fine-tuning script for DPR. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7631/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7631/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7630 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7630/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7630/comments | https://api.github.com/repos/huggingface/transformers/issues/7630/events | https://github.com/huggingface/transformers/pull/7630 | 716,352,901 | MDExOlB1bGxSZXF1ZXN0NDk5MTA1NjQ5 | 7,630 | Add GPT2 to sequence classification auto model | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,602 | 1,602 | 1,602 | MEMBER | null | Add `GPT2ForSequenceClassification` to the `AutoModelForSequenceClassification` auto model.
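
A usage sketch once this is merged (it assumes the auto class then resolves `gpt2` to the new sequence classification head; note GPT-2 ships without a pad token, so one has to be set for batched inputs):

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForSequenceClassification.from_pretrained("gpt2", num_labels=2)

# GPT-2 has no pad token by default; reuse eos so batches can be padded
tokenizer.pad_token = tokenizer.eos_token
model.config.pad_token_id = model.config.eos_token_id

inputs = tokenizer(["great movie", "terrible movie"], padding=True, return_tensors="pt")
logits = model(**inputs).logits
```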
closes #7493. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7630/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7630/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7630",
"html_url": "https://github.com/huggingface/transformers/pull/7630",
"diff_url": "https://github.com/huggingface/transformers/pull/7630.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7630.patch",
"merged_at": 1602062406000
} |
https://api.github.com/repos/huggingface/transformers/issues/7629 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7629/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7629/comments | https://api.github.com/repos/huggingface/transformers/issues/7629/events | https://github.com/huggingface/transformers/pull/7629 | 716,240,315 | MDExOlB1bGxSZXF1ZXN0NDk5MDE1Njc3 | 7,629 | Update model card - Fix arxiv link | {
"login": "iliaschalkidis",
"id": 1626984,
"node_id": "MDQ6VXNlcjE2MjY5ODQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1626984?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iliaschalkidis",
"html_url": "https://github.com/iliaschalkidis",
"followers_url": "https://api.github.com/users/iliaschalkidis/followers",
"following_url": "https://api.github.com/users/iliaschalkidis/following{/other_user}",
"gists_url": "https://api.github.com/users/iliaschalkidis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iliaschalkidis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iliaschalkidis/subscriptions",
"organizations_url": "https://api.github.com/users/iliaschalkidis/orgs",
"repos_url": "https://api.github.com/users/iliaschalkidis/repos",
"events_url": "https://api.github.com/users/iliaschalkidis/events{/privacy}",
"received_events_url": "https://api.github.com/users/iliaschalkidis/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,602 | 1,602 | 1,602 | NONE | null | Minor changes: Add arxiv link + Layout improvement + fix typos
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
--> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7629/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7629/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7629",
"html_url": "https://github.com/huggingface/transformers/pull/7629",
"diff_url": "https://github.com/huggingface/transformers/pull/7629.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7629.patch",
"merged_at": 1602102969000
} |
https://api.github.com/repos/huggingface/transformers/issues/7628 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7628/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7628/comments | https://api.github.com/repos/huggingface/transformers/issues/7628/events | https://github.com/huggingface/transformers/issues/7628 | 716,223,723 | MDU6SXNzdWU3MTYyMjM3MjM= | 7,628 | The newly added config decoder_start_token_id for bart-base model is wrong? | {
"login": "fomalhautb",
"id": 14837467,
"node_id": "MDQ6VXNlcjE0ODM3NDY3",
"avatar_url": "https://avatars.githubusercontent.com/u/14837467?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fomalhautb",
"html_url": "https://github.com/fomalhautb",
"followers_url": "https://api.github.com/users/fomalhautb/followers",
"following_url": "https://api.github.com/users/fomalhautb/following{/other_user}",
"gists_url": "https://api.github.com/users/fomalhautb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fomalhautb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fomalhautb/subscriptions",
"organizations_url": "https://api.github.com/users/fomalhautb/orgs",
"repos_url": "https://api.github.com/users/fomalhautb/repos",
"events_url": "https://api.github.com/users/fomalhautb/events{/privacy}",
"received_events_url": "https://api.github.com/users/fomalhautb/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"A similar problem is here [#5212](https://github.com/huggingface/transformers/issues/5212)",
"Moved there."
] | 1,602 | 1,602 | 1,602 | NONE | null | The new config file for `bart-base` has been updated on October 5.th. The new config file looks like the following:
```
{
...
"bos_token_id": 0,
...
"decoder_start_token_id": 2,
...
"eos_token_id": 2,
...
}
```
The `decoder_start_token_id` was added newly; it wasn't there before. As far as I understand, `decoder_start_token_id` should default to `bos_token_id`. This newly added config line changed the behavior of the `generate` function.
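
As a workaround sketch in the meantime, passing `decoder_start_token_id` explicitly to `generate` should override the config value and restore the previous behavior (assuming that behavior was indeed to start decoding from `bos`):

```python
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

inputs = tokenizer("UN Chief says there is no military solution", return_tensors="pt")
# explicitly start decoding from <s> (bos) instead of the configured </s>
output = model.generate(inputs["input_ids"], decoder_start_token_id=tokenizer.bos_token_id)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
 | {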
"url": "https://api.github.com/repos/huggingface/transformers/issues/7628/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7628/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7627 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7627/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7627/comments | https://api.github.com/repos/huggingface/transformers/issues/7627/events | https://github.com/huggingface/transformers/pull/7627 | 716,195,686 | MDExOlB1bGxSZXF1ZXN0NDk4OTc4MTcw | 7,627 | Added sampler'set_epoch when use distributed training | {
"login": "graykode",
"id": 10525011,
"node_id": "MDQ6VXNlcjEwNTI1MDEx",
"avatar_url": "https://avatars.githubusercontent.com/u/10525011?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/graykode",
"html_url": "https://github.com/graykode",
"followers_url": "https://api.github.com/users/graykode/followers",
"following_url": "https://api.github.com/users/graykode/following{/other_user}",
"gists_url": "https://api.github.com/users/graykode/gists{/gist_id}",
"starred_url": "https://api.github.com/users/graykode/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/graykode/subscriptions",
"organizations_url": "https://api.github.com/users/graykode/orgs",
"repos_url": "https://api.github.com/users/graykode/repos",
"events_url": "https://api.github.com/users/graykode/events{/privacy}",
"received_events_url": "https://api.github.com/users/graykode/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,602 | 1,608 | 1,608 | NONE | null | `run_squad.py` file is independent of `Trainer Class`(https://github.com/huggingface/transformers/issues/4398). Therefore, there is no method related to `set_epoch` in distributed training. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7627/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7627/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7627",
"html_url": "https://github.com/huggingface/transformers/pull/7627",
"diff_url": "https://github.com/huggingface/transformers/pull/7627.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7627.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/7626 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7626/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7626/comments | https://api.github.com/repos/huggingface/transformers/issues/7626/events | https://github.com/huggingface/transformers/issues/7626 | 716,186,615 | MDU6SXNzdWU3MTYxODY2MTU= | 7,626 | Unable to pass encoder_outputs to generate calls | {
"login": "gabisurita",
"id": 4023375,
"node_id": "MDQ6VXNlcjQwMjMzNzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4023375?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gabisurita",
"html_url": "https://github.com/gabisurita",
"followers_url": "https://api.github.com/users/gabisurita/followers",
"following_url": "https://api.github.com/users/gabisurita/following{/other_user}",
"gists_url": "https://api.github.com/users/gabisurita/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gabisurita/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gabisurita/subscriptions",
"organizations_url": "https://api.github.com/users/gabisurita/orgs",
"repos_url": "https://api.github.com/users/gabisurita/repos",
"events_url": "https://api.github.com/users/gabisurita/events{/privacy}",
"received_events_url": "https://api.github.com/users/gabisurita/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hey @gabisurita - I understand that one might want to forward encoder_outputs in the generate function. However, adding such a possibility opens the door for many problems in case `beam_search` is chosen. We are currently working on a bigger refactor that should solve this problem by a better design choice of `generate()`. I'm afraid that this will still take ~3,4 weeks to complete though.",
"Hi @patrickvonplaten,\r\n\r\nI've noticed that your PR refactoring generate was merged. It seems a big improvement, thank you! Still, `input_ids` is still required or overridden by an empty tensor. It's still not clear to me how can I use the new API with `encoder_outputs` or `imput_embeds`.",
"Hey @gabisurita - if you look at the tests, you can now directly use `beam_search` instead of generate for your use case I think :-). Here the tests: https://github.com/huggingface/transformers/blob/a7d73cfdd497d7bf6c9336452decacf540c46e20/tests/test_generation_utils.py#L295 \r\n\r\nFrom the tests, it should be quite easy to understand how to use `beam_search` directly I think :-) \r\n\r\nLet me know if that helps!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,602 | 1,619 | 1,619 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: Github Main branch
### Who can help
TextGeneration: @TevenLeScao
## Information
Model I am using (Bert, XLNet ...): T5
I'm unable to pass precomputed `encoder_outputs` to the `.generate()` method.
I've tried defining:
```python
model_kwargs = {"encoder_outputs": encoder_outputs}
output = model.generate(**model_kwargs)  # unpacked as keyword arguments
```
But I've noticed some validation errors for `input_ids`. Even if I replace input_ids with dummy values, I've noticed the model_kwargs is always replaced here:
https://github.com/huggingface/transformers/blob/master/src/transformers/generation_utils.py#L448
I think it can be fixed (without optimization) by just replacing:
```python
model_kwargs["encoder_outputs"] = encoder_outputs
```
with:
```python
model_kwargs.setdefault("encoder_outputs", encoder_outputs)
```
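
For reference, the end state I am after once the fix lands would look roughly like this (a sketch; it assumes `generate` then forwards the precomputed `encoder_outputs` instead of recomputing them):

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

inputs = tokenizer("translate English to German: Hello, my dog is cute", return_tensors="pt")
encoder_outputs = model.get_encoder()(**inputs)  # precompute once, reuse across calls

output = model.generate(input_ids=inputs["input_ids"], encoder_outputs=encoder_outputs)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```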
If you agree I can try to open a PR to fix this.
Best, | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7626/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7626/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7625 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7625/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7625/comments | https://api.github.com/repos/huggingface/transformers/issues/7625/events | https://github.com/huggingface/transformers/pull/7625 | 716,114,179 | MDExOlB1bGxSZXF1ZXN0NDk4OTExOTMw | 7,625 | Create README.md | {
"login": "lanwuwei",
"id": 8783580,
"node_id": "MDQ6VXNlcjg3ODM1ODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8783580?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lanwuwei",
"html_url": "https://github.com/lanwuwei",
"followers_url": "https://api.github.com/users/lanwuwei/followers",
"following_url": "https://api.github.com/users/lanwuwei/following{/other_user}",
"gists_url": "https://api.github.com/users/lanwuwei/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lanwuwei/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lanwuwei/subscriptions",
"organizations_url": "https://api.github.com/users/lanwuwei/orgs",
"repos_url": "https://api.github.com/users/lanwuwei/repos",
"events_url": "https://api.github.com/users/lanwuwei/events{/privacy}",
"received_events_url": "https://api.github.com/users/lanwuwei/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"will merge in the meantime, @lanwuwei \r\n\r\nFeel free to re-open a PR to update if needed."
] | 1,602 | 1,603 | 1,603 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
--> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7625/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7625/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7625",
"html_url": "https://github.com/huggingface/transformers/pull/7625",
"diff_url": "https://github.com/huggingface/transformers/pull/7625.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7625.patch",
"merged_at": 1603283380000
} |
https://api.github.com/repos/huggingface/transformers/issues/7624 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7624/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7624/comments | https://api.github.com/repos/huggingface/transformers/issues/7624/events | https://github.com/huggingface/transformers/issues/7624 | 716,108,290 | MDU6SXNzdWU3MTYxMDgyOTA= | 7,624 | Free Inference API Not Accessible | {
"login": "ncoop57",
"id": 7613470,
"node_id": "MDQ6VXNlcjc2MTM0NzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/7613470?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ncoop57",
"html_url": "https://github.com/ncoop57",
"followers_url": "https://api.github.com/users/ncoop57/followers",
"following_url": "https://api.github.com/users/ncoop57/following{/other_user}",
"gists_url": "https://api.github.com/users/ncoop57/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ncoop57/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ncoop57/subscriptions",
"organizations_url": "https://api.github.com/users/ncoop57/orgs",
"repos_url": "https://api.github.com/users/ncoop57/repos",
"events_url": "https://api.github.com/users/ncoop57/events{/privacy}",
"received_events_url": "https://api.github.com/users/ncoop57/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi Nathan, can you send us a quick email to [email protected]? \r\n\r\nThe free version of the Hub's Inference API is still up, but maybe you've hit the rate limiting?",
"Resolved. Ended up being a misinterpretation of my part on error codes that HF's API produces."
] | 1,602 | 1,602 | 1,602 | CONTRIBUTOR | null | Hi, I attempted to use the free version of the Model Hub's Inference API. However, it is not working for me anymore:

I do have an account and I am signed in when I get the above message. I have also completed my email verification. In addition, when I try to send a POST request using curl with my API token, I get a 503 error.
I was able to successfully use the free version on October 4th, 2020, so I was wondering: is this a bug, or is the free version no longer available? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7624/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7624/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7623 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7623/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7623/comments | https://api.github.com/repos/huggingface/transformers/issues/7623/events | https://github.com/huggingface/transformers/issues/7623 | 716,040,970 | MDU6SXNzdWU3MTYwNDA5NzA= | 7,623 | Implement PyTorch and/or TensorFlow sequence classification architectures for causal language models | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
}
] | closed | false | {
"login": "pasDamola",
"id": 26023424,
"node_id": "MDQ6VXNlcjI2MDIzNDI0",
"avatar_url": "https://avatars.githubusercontent.com/u/26023424?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pasDamola",
"html_url": "https://github.com/pasDamola",
"followers_url": "https://api.github.com/users/pasDamola/followers",
"following_url": "https://api.github.com/users/pasDamola/following{/other_user}",
"gists_url": "https://api.github.com/users/pasDamola/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pasDamola/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pasDamola/subscriptions",
"organizations_url": "https://api.github.com/users/pasDamola/orgs",
"repos_url": "https://api.github.com/users/pasDamola/repos",
"events_url": "https://api.github.com/users/pasDamola/events{/privacy}",
"received_events_url": "https://api.github.com/users/pasDamola/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "pasDamola",
"id": 26023424,
"node_id": "MDQ6VXNlcjI2MDIzNDI0",
"avatar_url": "https://avatars.githubusercontent.com/u/26023424?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pasDamola",
"html_url": "https://github.com/pasDamola",
"followers_url": "https://api.github.com/users/pasDamola/followers",
"following_url": "https://api.github.com/users/pasDamola/following{/other_user}",
"gists_url": "https://api.github.com/users/pasDamola/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pasDamola/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pasDamola/subscriptions",
"organizations_url": "https://api.github.com/users/pasDamola/orgs",
"repos_url": "https://api.github.com/users/pasDamola/repos",
"events_url": "https://api.github.com/users/pasDamola/events{/privacy}",
"received_events_url": "https://api.github.com/users/pasDamola/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @LysandreJik is this issue still open? I'll like to pick it up",
"I believe @fmcurti is working on the OpenAI GPT implementation, but both CTRL and TransfoXL are still open! Would love a PR!",
"Hi Lysandre, thanks for assigning this issue to me. I've been trying to\nsetup transformers on my local machine (Windows 10). I've been having\nseveral issues with the setup.\nIs there any guide I could follow?\n\nThanks much\n\nOn Fri, 9 Oct 2020 at 12:38, Lysandre Debut <[email protected]>\nwrote:\n\n> Assigned #7623 <https://github.com/huggingface/transformers/issues/7623>\n> to @pasDamola <https://github.com/pasDamola>.\n>\n> —\n> You are receiving this because you were assigned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/7623#event-3859742526>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AGGRMAH2H4Q5JYISSW23W3LSJ3Y4HANCNFSM4SGSU6FQ>\n> .\n>\n",
"Sure, have you taken a look at the [`CONTRIBUTING.md` document](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md)? What issues have you been having?",
"Yes I have.\r\nWhen I run `pip install -e \".[dev]\"`, I always encounter this error. I'm also running it in anaconda environment\r\n\r\n\r\n\r\n`",
"I believe the repo cannot be installed from conda as of now, can you use a pip virtual environment?",
"Alright, I'll try that now",
"Still having the same error I had while in conda. I'm trying to install tensorflow locally and retry this again",
"Hi @LysandreJik , I'm still having the same errors running on a pip virtual environment",
"Do you manage to install `TensorFlow` in your pip environment?",
"Hi @LysandreJik not yet. I get a similar error. I'm trying to look for solutions on the internet\r\n\r\n\r\n",
"Hi @LysandreJik – has anyone picked up the CTRL or TransfoXL architectures yet? I'd love to take a crack at one of them if available. Thank you!",
"No, feel free to take a crack at it! Let me know and I'll put you in the issue description.",
"is there anybody working on these ? @LysandreJik ",
"I believe CTRL and TransfoXL are still available. Feel free to open a PR!",
"Hi @LysandreJik , \r\n\r\nAs this Feature request is closed\r\n\r\nDo we need TF implementation of causal models GPT-1, Transfoxl and CTRL?\r\nI'm ready to contributed for that as well.\r\n\r\n",
"That would be very welcome @spatil6!",
"Ok thanks @LysandreJik.\r\n\r\nI'm waiting for this PR #8714 to get merge.\r\nOnce done, I'll raise PR for these models as well."
] | 1,602 | 1,606 | 1,606 | MEMBER | null | # 🚀 Feature request
The architecture `GPT2ForSequenceClassification` was added in #7501 in PyTorch. It would be great to have it in TensorFlow (cf. issue #7622), but it would also be great to have it for other causal models: ~~OpenAI GPT~~, ~~CTRL~~ (PR opened by @elk-cloner), ~~TransfoXL~~ (PR opened by @spatil6).
Below is a list of items to follow to make sure the integration of such an architecture is complete:
- Implement `XXXForSequenceClassification` in `modeling_xxx.py` or `TFXXXForSequenceClassification` in `modeling_tf_xxx.py`
- Test that architecture in `tests/test_modeling_xxx.py` or `tests/test_modeling_tf_xxx.py`
- Add that architecture to `__init__.py` and `docs/source/model_doc/xxx.rst`.
Taking a look at the code changes in #7501 would be a good start.
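For orientation, a rough, untested sketch of the pattern from #7501 follows; the class and argument names are illustrative placeholders for whichever model is being ported, not the final API:

```python
import torch
from torch import nn


class XXXForSequenceClassification(nn.Module):
    # Illustrative sketch only: the real class should subclass XXXPreTrainedModel
    # and follow the existing conventions of modeling_xxx.py.
    def __init__(self, base_model, hidden_size, num_labels, pad_token_id=None):
        super().__init__()
        self.transformer = base_model  # the pretrained XXXModel backbone
        self.score = nn.Linear(hidden_size, num_labels, bias=False)
        self.pad_token_id = pad_token_id

    def forward(self, input_ids, attention_mask=None):
        hidden_states = self.transformer(input_ids, attention_mask=attention_mask)[0]
        logits = self.score(hidden_states)  # (batch, seq_len, num_labels)
        if self.pad_token_id is not None:
            # Causal models have no [CLS] token, so classify from the last
            # non-padding token of each sequence.
            sequence_lengths = (input_ids != self.pad_token_id).long().sum(-1) - 1
        else:
            sequence_lengths = torch.full(
                (input_ids.size(0),), input_ids.size(1) - 1,
                dtype=torch.long, device=input_ids.device,
            )
        batch_idx = torch.arange(input_ids.size(0), device=input_ids.device)
        return logits[batch_idx, sequence_lengths]  # (batch, num_labels)
```

The main design point to preserve is the last-token pooling: unlike BERT-style models there is no [CLS] token, so the logits at the final non-padding position stand in for a sequence summary.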
A very good first issue to get acquainted with the library and its architectures!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7623/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7623/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7622 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7622/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7622/comments | https://api.github.com/repos/huggingface/transformers/issues/7622/events | https://github.com/huggingface/transformers/issues/7622 | 716,038,321 | MDU6SXNzdWU3MTYwMzgzMjE= | 7,622 | Implement a TF2 version of `GPT2ForSequenceClassification` | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
}
] | closed | false | {
"login": "y2s82",
"id": 1997371,
"node_id": "MDQ6VXNlcjE5OTczNzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1997371?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/y2s82",
"html_url": "https://github.com/y2s82",
"followers_url": "https://api.github.com/users/y2s82/followers",
"following_url": "https://api.github.com/users/y2s82/following{/other_user}",
"gists_url": "https://api.github.com/users/y2s82/gists{/gist_id}",
"starred_url": "https://api.github.com/users/y2s82/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/y2s82/subscriptions",
"organizations_url": "https://api.github.com/users/y2s82/orgs",
"repos_url": "https://api.github.com/users/y2s82/repos",
"events_url": "https://api.github.com/users/y2s82/events{/privacy}",
"received_events_url": "https://api.github.com/users/y2s82/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "y2s82",
"id": 1997371,
"node_id": "MDQ6VXNlcjE5OTczNzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1997371?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/y2s82",
"html_url": "https://github.com/y2s82",
"followers_url": "https://api.github.com/users/y2s82/followers",
"following_url": "https://api.github.com/users/y2s82/following{/other_user}",
"gists_url": "https://api.github.com/users/y2s82/gists{/gist_id}",
"starred_url": "https://api.github.com/users/y2s82/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/y2s82/subscriptions",
"organizations_url": "https://api.github.com/users/y2s82/orgs",
"repos_url": "https://api.github.com/users/y2s82/repos",
"events_url": "https://api.github.com/users/y2s82/events{/privacy}",
"received_events_url": "https://api.github.com/users/y2s82/received_events",
"type": "User",
"site_admin": false
}
] | [
"May I take a crack at this? :)",
"Yes, you may! Thanks @y2s82!",
"Hi @LysandreJik , I have completed development for this FR. can you please assign it to me, so i'll raise PR for it.",
"Feel free to open a PR"
] | 1,602 | 1,607 | 1,607 | MEMBER | null | # 🚀 Feature request
The architecture `GPT2ForSequenceClassification` was added in #7501 in PyTorch. It would be great to have it in TensorFlow as well.
Below is a list of items to follow to make sure the integration is complete:
- Implement `TFGPT2ForSequenceClassification` in `modeling_tf_gpt2.py`
- Test that architecture in `tests/test_modeling_tf_gpt2.py`
- Add that architecture to `__init__.py` and `docs/source/model_doc/gpt2.rst`.
Taking a look at the code changes in #7501 would be a good start, as this PR would essentially be a TF2 copy of it.
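To make the scope concrete, here is a rough, untested TF2 sketch; the class name and wiring are illustrative only, and the real implementation should subclass `TFGPT2PreTrainedModel` and mirror the PyTorch version's last-non-padding-token pooling:

```python
import tensorflow as tf
from transformers import TFGPT2Model


class TFGPT2SequenceClassificationSketch(tf.keras.Model):
    # Illustrative only: the real TFGPT2ForSequenceClassification should be a
    # TFGPT2PreTrainedModel subclass, matching the PyTorch version in #7501.
    def __init__(self, num_labels, pad_token_id=None):
        super().__init__()
        self.transformer = TFGPT2Model.from_pretrained("gpt2")
        self.score = tf.keras.layers.Dense(num_labels, use_bias=False, name="score")
        self.pad_token_id = pad_token_id

    def call(self, input_ids, attention_mask=None):
        hidden_states = self.transformer(input_ids, attention_mask=attention_mask)[0]
        logits = self.score(hidden_states)  # (batch, seq_len, num_labels)
        if self.pad_token_id is not None:
            # Index of the last non-padding token in each sequence
            lengths = tf.reduce_sum(
                tf.cast(tf.not_equal(input_ids, self.pad_token_id), tf.int32),
                axis=-1,
            ) - 1
        else:
            lengths = tf.fill([tf.shape(input_ids)[0]], tf.shape(input_ids)[1] - 1)
        return tf.gather(logits, lengths, axis=1, batch_dims=1)  # (batch, num_labels)
```

A matching test in `tests/test_modeling_tf_gpt2.py` could then largely reuse the assertions from the PyTorch test, with tensors swapped for `tf.Tensor`s.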
A very good first issue to get acquainted with the library and its architectures! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7622/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7622/timeline | completed | null | null |