| Column | Type | Stats |
|---|---|---|
| url | string | lengths 62-66 |
| repository_url | string | 1 value |
| labels_url | string | lengths 76-80 |
| comments_url | string | lengths 71-75 |
| events_url | string | lengths 69-73 |
| html_url | string | lengths 50-56 |
| id | int64 | 377M-2.15B |
| node_id | string | lengths 18-32 |
| number | int64 | 1-29.2k |
| title | string | lengths 1-487 |
| user | dict | |
| labels | list | |
| state | string | 2 values |
| locked | bool | 2 classes |
| assignee | dict | |
| assignees | list | |
| comments | sequence | |
| created_at | int64 | 1.54k-1.71k |
| updated_at | int64 | 1.54k-1.71k |
| closed_at | int64 | 1.54k-1.71k |
| author_association | string | 4 values |
| active_lock_reason | string | 2 values |
| body | string | lengths 0-234k |
| reactions | dict | |
| timeline_url | string | lengths 71-75 |
| state_reason | string | 3 values |
| draft | bool | 2 classes |
| pull_request | dict | |
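The records below follow this schema. For reference, a dataset with this layout can be loaded and inspected with the `datasets` library; this is a minimal sketch, and the hub identifier is a placeholder since the actual dataset ID is not given here.

```python
from datasets import load_dataset

# Placeholder ID -- substitute the actual dataset repository on the Hub.
ds = load_dataset("your-username/transformers-github-issues", split="train")

print(ds.features)  # column names and types, matching the table above
row = ds[0]
print(row["number"], row["title"], row["state"])
# `comments` is a sequence of strings; `user` and `reactions` are nested dicts.
print(len(row["comments"]), row["reactions"]["total_count"])
```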
https://api.github.com/repos/huggingface/transformers/issues/8521
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8521/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8521/comments
https://api.github.com/repos/huggingface/transformers/issues/8521/events
https://github.com/huggingface/transformers/issues/8521
742,481,553
MDU6SXNzdWU3NDI0ODE1NTM=
8,521
Tagged versions of model in new model hub don't work
{ "login": "brandenchan", "id": 33759007, "node_id": "MDQ6VXNlcjMzNzU5MDA3", "avatar_url": "https://avatars.githubusercontent.com/u/33759007?v=4", "gravatar_id": "", "url": "https://api.github.com/users/brandenchan", "html_url": "https://github.com/brandenchan", "followers_url": "https://api.github.com/users/brandenchan/followers", "following_url": "https://api.github.com/users/brandenchan/following{/other_user}", "gists_url": "https://api.github.com/users/brandenchan/gists{/gist_id}", "starred_url": "https://api.github.com/users/brandenchan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/brandenchan/subscriptions", "organizations_url": "https://api.github.com/users/brandenchan/orgs", "repos_url": "https://api.github.com/users/brandenchan/repos", "events_url": "https://api.github.com/users/brandenchan/events{/privacy}", "received_events_url": "https://api.github.com/users/brandenchan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Looking into it right now (cc @Pierrci)", "looks like we were supporting lightweight tags but not annotated ones. ", "Should be fixed: https://huggingface.co/deepset/roberta-base-squad2/tree/v2.0", "That fixed the problem! Thanks very much", "Thanks, and please keep the feedback coming! <3" ]
1,605
1,605
1,605
CONTRIBUTOR
null
Our model, deepset/roberta-base-squad2, was originally uploaded under the old-style model hub. I have committed a new version of the deepset/roberta-base-squad2 model to the model hub using the new git-based system introduced in transformers 3.5.0. I have two tags (v1.0 and v2.0) that I have also pushed to the repo. The tags show up in the model hub drop-down, but when I click on either of the tags, it says "Not Found: Error: Invalid rev id". It also seems I cannot load the models when I specify `revision=v1.0` or `revision=v2.0`. If I don't specify a revision, it seems to load a model, though I'm not sure which one. This is the code I used: ``` tokenizer = AutoTokenizer.from_pretrained( "deepset/roberta-base-squad2", revision="v2.0" # tag name, or branch name, or commit hash ) ``` What steps can I take so that I can access both versions through the model hub website, and by specifying name and revision? Thanks, Branden @julien-c
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8521/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8521/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8520
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8520/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8520/comments
https://api.github.com/repos/huggingface/transformers/issues/8520/events
https://github.com/huggingface/transformers/pull/8520
742,465,173
MDExOlB1bGxSZXF1ZXN0NTIwNTk0MDk0
8,520
Model sharing doc: more tweaks
{ "login": "julien-c", "id": 326577, "node_id": "MDQ6VXNlcjMyNjU3Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/julien-c", "html_url": "https://github.com/julien-c", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "organizations_url": "https://api.github.com/users/julien-c/orgs", "repos_url": "https://api.github.com/users/julien-c/repos", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "received_events_url": "https://api.github.com/users/julien-c/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "feel free to merge this when it's a good time!" ]
1,605
1,605
1,605
MEMBER
null
cc @Pierrci
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8520/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8520/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8520", "html_url": "https://github.com/huggingface/transformers/pull/8520", "diff_url": "https://github.com/huggingface/transformers/pull/8520.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8520.patch", "merged_at": 1605287427000 }
https://api.github.com/repos/huggingface/transformers/issues/8519
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8519/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8519/comments
https://api.github.com/repos/huggingface/transformers/issues/8519/events
https://github.com/huggingface/transformers/issues/8519
742,402,860
MDU6SXNzdWU3NDI0MDI4NjA=
8,519
MLflowCallback to log run_name argument
{ "login": "HenryMaguire", "id": 2844109, "node_id": "MDQ6VXNlcjI4NDQxMDk=", "avatar_url": "https://avatars.githubusercontent.com/u/2844109?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HenryMaguire", "html_url": "https://github.com/HenryMaguire", "followers_url": "https://api.github.com/users/HenryMaguire/followers", "following_url": "https://api.github.com/users/HenryMaguire/following{/other_user}", "gists_url": "https://api.github.com/users/HenryMaguire/gists{/gist_id}", "starred_url": "https://api.github.com/users/HenryMaguire/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HenryMaguire/subscriptions", "organizations_url": "https://api.github.com/users/HenryMaguire/orgs", "repos_url": "https://api.github.com/users/HenryMaguire/repos", "events_url": "https://api.github.com/users/HenryMaguire/events{/privacy}", "received_events_url": "https://api.github.com/users/HenryMaguire/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[ { "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false } ]
[ "Please don't hesitate to suggest a PR, `run_name` is there just for this reason!", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "It will be useful to have this, and it will organize all the experiments better.", "Is there any update on this issue and #12841? I have a very simple one-line solution of passing `args.run_name` (which currently serves for `wandb`) to `mlflow.start_run` that can fix this. I can submit a PR in case of need.", "@HenryMaguire can you send link to your notebook or colab?" ]
1,605
1,643
1,611
NONE
null
# 🚀 Feature request When using the MLflowCallback (set as default for Trainer), I would like to log the `run_name` argument passed to TrainingArguments as the Run Name on the MLflow dashboard. Currently, runs are being logged as nameless, e.g. see below. ![Screenshot 2020-11-13 at 10 29 32](https://user-images.githubusercontent.com/2844109/99062592-21abd500-259b-11eb-9caf-c43d2bcb31dc.png) ## Motivation Trainer makes training 🤗 models so easy, and MLflow is great for organising experiments/caching artifacts. I would like to make it easier to organise experimental runs and make research easier, particularly for larger teams. This feature would be a very simple patch on the original PR #8016. Example usage: ``` training_args = TrainingArguments( label_names=['labels_t1', 'labels_t2'], output_dir='./runs', # output directory run_name='multitask_clf_<run_name>', ) trainer = Trainer( model=model, # the instantiated 🤗 Transformers model to be trained args=training_args, # training arguments, defined above train_dataset=train_dataset, # training dataset ) ``` ## Your contribution I can submit a PR
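The one-line idea suggested later in the thread is to forward `args.run_name` to `mlflow.start_run`. As a self-contained sketch of what that does on the MLflow side (shown outside `Trainer`, since the callback did not pass the name at the time; the logged values are illustrative):

```python
import mlflow

run_name = "multitask_clf_experiment_1"  # the value TrainingArguments(run_name=...) carries

with mlflow.start_run(run_name=run_name):
    # The dashboard now shows `run_name` instead of a nameless run.
    mlflow.log_param("learning_rate", 3e-5)
    mlflow.log_metric("eval_loss", 0.42)
```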
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8519/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8519/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8518
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8518/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8518/comments
https://api.github.com/repos/huggingface/transformers/issues/8518/events
https://github.com/huggingface/transformers/pull/8518
742,388,374
MDExOlB1bGxSZXF1ZXN0NTIwNTI5OTY2
8,518
[T5] Bug correction & Refactor
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I have tested both \"refactor_t5\" and \"major_t5_refactor\" branches using \"T5ForConditionalGeneration\".\r\n\r\nI didn't re-convert the Tensorflow to Pytorch, since you already told me the conversion process is not the problem.\r\n\r\nDoesn't seem it solved our issue yet, but it gives different rubbish output. Maybe, it is a good step into the right direction.\r\n\r\nThanks Patrick.\r\n\r\n", "Hi @patrickvonplaten, does this PR effects the checkpoint one gets when calling ```AutoModel.from_pretrained(\"t5-3b\")```?\r\n\r\n I am investigating why my results with transformers 3.3.1 and T5 changed and encountered this, according to the date seems like it was merged in version 3.5.0. (edit: I see now it was merged in 4.0.0 https://github.com/huggingface/transformers/releases/tag/v4.0.0)\r\n\r\nI wonder if this changes the T5 weights/checkpoint I get with my version?" ]
1,605
1,620
1,605
MEMBER
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> **!!!BUG DETECTION!!!** While integrating T5v1.1/mT5, a bug in T5 was detected. T5 actually never uses a `relative_position_bias` in the `EncDecSelfAttention` layer. Previously we used a bi-directional `relative_position_bias` in the `EncDecSelfAttention`, which is wrong IMO (see https://github.com/huggingface/transformers/issues/6285#issuecomment-702371111 for reference). An integration test against the original T5 model was added to make sure removing `relative_position_bias` is the correct behavior (in case @craffel reads this - maybe you could confirm :-)). Luckily, the bug did not significantly influence the results, as can be seen by the very minor changes in the slow tests. This is also why it wasn't noticed earlier. => So all pre-trained & fine-tuned T5 models still work! In addition, this PR: - Refactors: cleans the code and removes unnecessarily complicated code - Removes `n_positions` / `max_position_embeddings` from the config, since T5 is not limited by a fixed learned position embedding matrix, see: https://github.com/huggingface/transformers/issues/8047 Fixes #8047 Also cc @agemagician for information (I highly doubt though that this will fix the problem we have in your case) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 -->
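To make the reported bug concrete: the relative position bias belongs only in self-attention, and after the fix the cross-attention layer receives no such bias. A minimal illustrative sketch (simplified on purpose; not the actual T5 implementation):

```python
import torch

def attention_scores(query, key, position_bias=None):
    # query: (batch, heads, q_len, dim), key: (batch, heads, k_len, dim)
    scores = torch.matmul(query, key.transpose(-1, -2))
    if position_bias is not None:
        # Only self-attention layers add a (bucketed) relative position bias.
        scores = scores + position_bias
    return scores

# Self-attention:  attention_scores(q, k, position_bias=bias)
# Cross-attention: attention_scores(q, k)  # bias stays None after the fix,
# since relative distances between decoder queries and encoder keys
# are not meaningful.
```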
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8518/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8518/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8518", "html_url": "https://github.com/huggingface/transformers/pull/8518", "diff_url": "https://github.com/huggingface/transformers/pull/8518.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8518.patch", "merged_at": 1605283052000 }
https://api.github.com/repos/huggingface/transformers/issues/8517
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8517/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8517/comments
https://api.github.com/repos/huggingface/transformers/issues/8517/events
https://github.com/huggingface/transformers/issues/8517
742,321,290
MDU6SXNzdWU3NDIzMjEyOTA=
8,517
XLM-RoBERTa tokenizer changes characters during tokenization
{ "login": "konstantinmiller", "id": 2629945, "node_id": "MDQ6VXNlcjI2Mjk5NDU=", "avatar_url": "https://avatars.githubusercontent.com/u/2629945?v=4", "gravatar_id": "", "url": "https://api.github.com/users/konstantinmiller", "html_url": "https://github.com/konstantinmiller", "followers_url": "https://api.github.com/users/konstantinmiller/followers", "following_url": "https://api.github.com/users/konstantinmiller/following{/other_user}", "gists_url": "https://api.github.com/users/konstantinmiller/gists{/gist_id}", "starred_url": "https://api.github.com/users/konstantinmiller/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/konstantinmiller/subscriptions", "organizations_url": "https://api.github.com/users/konstantinmiller/orgs", "repos_url": "https://api.github.com/users/konstantinmiller/repos", "events_url": "https://api.github.com/users/konstantinmiller/events{/privacy}", "received_events_url": "https://api.github.com/users/konstantinmiller/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi, this is unfortunately a limit of this tokenizer. We try to stay as close as possible to the original implementation, so we will not be able to change this behavior.", "I see. Well, it's not a big deal, as the characters it replaces are quite rare and not really important, at least not for my application. So, I just replace them upfront in the text to make sure that the output of the tokenizer still matches the input.\r\n\r\nThanks for the awesome work you are doing by providing this and other libraries!", "Glad you like it :)" ]
1,605
1,605
1,605
NONE
null
I'm using `transformers` 3.5.0. Whenever the XLM-RoBERTa tokenizer encounters some characters such as '²' (superscript 2, `ord('²')` is 178), it converts them to other characters (in this example, to the plain '2', with `ord('2')` being 50). That is, with `tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-large')`, `tokenizer.tokenize('²')` or, alternatively, `tokenizer.tokenize(chr(178))` returns `['▁2']`.
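The workaround described in the thread is to apply the same replacements upfront, so the tokenizer's output can still be aligned with the input text. A minimal sketch (the replacement map is only an example; extend it with whatever characters your data contains):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")

# Mirror the tokenizer's own normalization before tokenizing.
REPLACEMENTS = {"\u00b2": "2"}  # '²' -> '2'

def normalize(text: str) -> str:
    for old, new in REPLACEMENTS.items():
        text = text.replace(old, new)
    return text

text = normalize("x² + y²")
tokens = tokenizer.tokenize(text)  # now matches the pre-normalized input
```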
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8517/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8517/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8516
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8516/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8516/comments
https://api.github.com/repos/huggingface/transformers/issues/8516/events
https://github.com/huggingface/transformers/pull/8516
742,314,800
MDExOlB1bGxSZXF1ZXN0NTIwNDcwMzE3
8,516
SWA
{ "login": "josh-cooper", "id": 41467557, "node_id": "MDQ6VXNlcjQxNDY3NTU3", "avatar_url": "https://avatars.githubusercontent.com/u/41467557?v=4", "gravatar_id": "", "url": "https://api.github.com/users/josh-cooper", "html_url": "https://github.com/josh-cooper", "followers_url": "https://api.github.com/users/josh-cooper/followers", "following_url": "https://api.github.com/users/josh-cooper/following{/other_user}", "gists_url": "https://api.github.com/users/josh-cooper/gists{/gist_id}", "starred_url": "https://api.github.com/users/josh-cooper/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/josh-cooper/subscriptions", "organizations_url": "https://api.github.com/users/josh-cooper/orgs", "repos_url": "https://api.github.com/users/josh-cooper/repos", "events_url": "https://api.github.com/users/josh-cooper/events{/privacy}", "received_events_url": "https://api.github.com/users/josh-cooper/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,605
1,605
1,605
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8516/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8516/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8516", "html_url": "https://github.com/huggingface/transformers/pull/8516", "diff_url": "https://github.com/huggingface/transformers/pull/8516.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8516.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/8515
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8515/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8515/comments
https://api.github.com/repos/huggingface/transformers/issues/8515/events
https://github.com/huggingface/transformers/pull/8515
742,299,562
MDExOlB1bGxSZXF1ZXN0NTIwNDYwNTAx
8,515
Adding the prepare_seq2seq_batch function to ProphetNet
{ "login": "forest1988", "id": 2755894, "node_id": "MDQ6VXNlcjI3NTU4OTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2755894?v=4", "gravatar_id": "", "url": "https://api.github.com/users/forest1988", "html_url": "https://github.com/forest1988", "followers_url": "https://api.github.com/users/forest1988/followers", "following_url": "https://api.github.com/users/forest1988/following{/other_user}", "gists_url": "https://api.github.com/users/forest1988/gists{/gist_id}", "starred_url": "https://api.github.com/users/forest1988/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/forest1988/subscriptions", "organizations_url": "https://api.github.com/users/forest1988/orgs", "repos_url": "https://api.github.com/users/forest1988/repos", "events_url": "https://api.github.com/users/forest1988/events{/privacy}", "received_events_url": "https://api.github.com/users/forest1988/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "For fixing the check_code_quality failure, I run `make style` in the repository, but it tries to re-format so many files.\r\nShould I check some settings for `black`?", "I changed the environment and made it work again according to the instructions, and it seems that formatting with black works appropriately. Sorry for the bother you.", "@patrickvonplaten \r\nThank you for reviewing and merging the PR! I'm so happy to read your comment.\r\n\r\nI haven't tried using the full dataset for fine-tuning ProphetNet with the seq2seq trainer, but I think I can try it by adding some modifications to my test code used during my implementation process.\r\nI will try it and would like to post the results on the URL if I can get something interesting!\r\n", "@patrickvonplaten \r\nI've just posted my fine-tuning experiment result on https://discuss.huggingface.co/t/how-can-i-do-text-summarization-using-prophetnet/1661/2.\r\n\r\nI'm sorry it is not the case of using the full dataset.\r\nConsidering the time limitations of the execution environment, I used only about one-tenth of the dataset for now, but I think we could get better results if we used the entire dataset.", "That's already of great help - thank you so much!", "It's my pleasure!" ]
1,605
1,605
1,605
CONTRIBUTOR
null
# What does this PR do? I tried to use ProphetNet with Seq2SeqTrainer, but it failed. The error message told me this is because the collator uses `prepare_seq2seq_batch()` in `_encode()`, but `prepare_seq2seq_batch()` is not implemented in the ProphetNet tokenizer. I've gotten kind advice in the HuggingFace forum, and implemented the function. https://discuss.huggingface.co/t/the-reason-prepare-seq2seq-batch-for-prophetnet-is-not-existed/1758 The modifications are as follows: - Add `prepare_seq2seq_batch()` in `/src/transformers/tokenization_prophetnet.py`. - To use .view in the loss computation in Seq2SeqTrainer, I added a check in `/src/transformers/modeling_prophetnet.py` confirming that the logits are contiguous. I've checked that it works on CPU and GPU as below: ``` !python finetune_trainer.py \ --learning_rate=3e-5 \ --do_train --do_eval --evaluate_during_training \ --max_source_length 511 \ --per_device_train_batch_size 2 \ --predict_with_generate \ --n_train 300 \ --n_val 100 \ --model_name_or_path microsoft/prophetnet-large-uncased \ --data_dir $XSUM_DIR \ --output_dir tmp_gpu \ --overwrite_output_dir ``` Although the `PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES` is 512, if `--max_source_length` is set to 512, a CUDA error occurs. I'm sorry, but I have not been able to identify the cause of this. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. https://discuss.huggingface.co/t/the-reason-prepare-seq2seq-batch-for-prophetnet-is-not-existed/1758 - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? https://github.com/forest1988/colaboratory/blob/main/prophetnet_seq2seqtrainer.ipynb I'm sorry, I misunderstood what is being asked here. Now I understand that code under `./tests/` is needed. ~~I'm working on this, but I'm getting errors in formatting etc. and using `black` won't fix it.~~ I added related content to `test_tokenization_prophetnet.py`. I changed the environment and made it work again according to the instructions, and it seems that formatting with `black` works appropriately. ## Who can review? @patrickvonplaten @sshleifer Thank you for kindly answering my questions in the forum!
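For context, the added method is used roughly like this (a sketch; the exact keys returned have varied across versions, and `prepare_seq2seq_batch` was later deprecated in favor of calling the tokenizer directly):

```python
from transformers import ProphetNetTokenizer

tokenizer = ProphetNetTokenizer.from_pretrained("microsoft/prophetnet-large-uncased")

batch = tokenizer.prepare_seq2seq_batch(
    src_texts=["the article text to summarize ..."],
    tgt_texts=["the reference summary"],
    max_length=511,  # staying below the 512-position limit mentioned above
    return_tensors="pt",
)
print(batch.keys())  # typically input_ids, attention_mask, labels
```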
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8515/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8515/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8515", "html_url": "https://github.com/huggingface/transformers/pull/8515", "diff_url": "https://github.com/huggingface/transformers/pull/8515.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8515.patch", "merged_at": 1605532706000 }
https://api.github.com/repos/huggingface/transformers/issues/8514
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8514/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8514/comments
https://api.github.com/repos/huggingface/transformers/issues/8514/events
https://github.com/huggingface/transformers/issues/8514
742,265,651
MDU6SXNzdWU3NDIyNjU2NTE=
8,514
How to pretrain the model (like Roberta) again?
{ "login": "drxmy", "id": 39789137, "node_id": "MDQ6VXNlcjM5Nzg5MTM3", "avatar_url": "https://avatars.githubusercontent.com/u/39789137?v=4", "gravatar_id": "", "url": "https://api.github.com/users/drxmy", "html_url": "https://github.com/drxmy", "followers_url": "https://api.github.com/users/drxmy/followers", "following_url": "https://api.github.com/users/drxmy/following{/other_user}", "gists_url": "https://api.github.com/users/drxmy/gists{/gist_id}", "starred_url": "https://api.github.com/users/drxmy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/drxmy/subscriptions", "organizations_url": "https://api.github.com/users/drxmy/orgs", "repos_url": "https://api.github.com/users/drxmy/repos", "events_url": "https://api.github.com/users/drxmy/events{/privacy}", "received_events_url": "https://api.github.com/users/drxmy/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "The authors of the Transformers library wrote a script for this. You can find it under the examples folder -> language modeling [here](https://github.com/huggingface/transformers/tree/master/examples/language-modeling#robertabertdistilbert-and-masked-language-modeling). ", "> The authors of the Transformers library wrote a script for this. You can find it under the examples folder -> language modeling [here](https://github.com/huggingface/transformers/tree/master/examples/language-modeling#robertabertdistilbert-and-masked-language-modeling).\r\n\r\nThank you! I will check it out.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,605
1,611
1,611
NONE
null
I don't want to pretrain the model from scratch. I have a dataset related to my task, and I want to continue pretraining an existing transformers model on it. Could someone give me some advice on how to do this, or which document to read? Thank you!
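The script linked in the answer does this for you; a condensed sketch of the same idea (continuing masked-LM pretraining from released weights) looks roughly like this. The file `my_corpus.txt` is a placeholder for your task-related text.

```python
from transformers import (
    AutoModelForMaskedLM, AutoTokenizer,
    DataCollatorForLanguageModeling, LineByLineTextDataset,
    Trainer, TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")  # start from released weights

# One training example per line of the corpus file.
dataset = LineByLineTextDataset(tokenizer=tokenizer, file_path="my_corpus.txt", block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="./roberta-continued", num_train_epochs=1),
    data_collator=collator,
    train_dataset=dataset,
)
trainer.train()
```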
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8514/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8514/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8513
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8513/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8513/comments
https://api.github.com/repos/huggingface/transformers/issues/8513/events
https://github.com/huggingface/transformers/issues/8513
742,205,852
MDU6SXNzdWU3NDIyMDU4NTI=
8,513
Using Pretrained BERT model to add additional words that are not recognized by the model
{ "login": "geo47", "id": 1557880, "node_id": "MDQ6VXNlcjE1NTc4ODA=", "avatar_url": "https://avatars.githubusercontent.com/u/1557880?v=4", "gravatar_id": "", "url": "https://api.github.com/users/geo47", "html_url": "https://github.com/geo47", "followers_url": "https://api.github.com/users/geo47/followers", "following_url": "https://api.github.com/users/geo47/following{/other_user}", "gists_url": "https://api.github.com/users/geo47/gists{/gist_id}", "starred_url": "https://api.github.com/users/geo47/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/geo47/subscriptions", "organizations_url": "https://api.github.com/users/geo47/orgs", "repos_url": "https://api.github.com/users/geo47/repos", "events_url": "https://api.github.com/users/geo47/events{/privacy}", "received_events_url": "https://api.github.com/users/geo47/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You don't need to do either of those things! You should add tokens to your tokenizer by leveraging the `add_tokens()` method, and then resize your model's embedding matrix.\r\n\r\nThen, you should train your model on a dataset that has those entities so that it understands the meaning of these entities. It seems to be what you're doing here, so just make sure to add the tokens to your tokenizer first. You can see the doc about it [here](https://huggingface.co/transformers/internal/tokenization_utils.html?highlight=add_tokens#transformers.tokenization_utils_base.SpecialTokensMixin.add_tokens).\r\n\r\nAlso, we try to keep the github issues only for bugs and feature requests. Please ask questions/discussions on the [instead](https://discuss.huggingface.co). Thanks!" ]
1,605
1,605
1,605
NONE
null
Hello, I would like some help with adding additional words to an existing BERT model. I have two queries; kindly guide me. I am working on an NER task for a domain. There are a few words (I'm not sure of the exact number) that BERT recognizes as [UNK], but the model needs to recognize those entities. The pretrained "bert-base-cased" model reaches up to 80% accuracy when I provide labeled data and fine-tune it, but intuitively the model would learn better if it recognized all the entities. 1. Do I need to add those unknown entities to vocab.txt and train the model again? 2. Do I need to train the BERT model on my data from scratch? Thanks...
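A minimal sketch of the approach suggested in the answer above (the added tokens and `num_labels` are placeholders):

```python
from transformers import BertForTokenClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
model = BertForTokenClassification.from_pretrained("bert-base-cased", num_labels=9)

# Add the domain words that currently map to [UNK].
num_added = tokenizer.add_tokens(["mydomainterm", "anotherterm"])
print(f"added {num_added} tokens")

# Grow the embedding matrix so the new ids get (randomly initialized) vectors;
# fine-tuning on the labeled NER data then learns their representations.
model.resize_token_embeddings(len(tokenizer))
```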
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8513/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8513/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8512
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8512/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8512/comments
https://api.github.com/repos/huggingface/transformers/issues/8512/events
https://github.com/huggingface/transformers/issues/8512
742,152,232
MDU6SXNzdWU3NDIxNTIyMzI=
8,512
Issue while model sharing and uploading on huggingface
{ "login": "saburbutt", "id": 33926182, "node_id": "MDQ6VXNlcjMzOTI2MTgy", "avatar_url": "https://avatars.githubusercontent.com/u/33926182?v=4", "gravatar_id": "", "url": "https://api.github.com/users/saburbutt", "html_url": "https://github.com/saburbutt", "followers_url": "https://api.github.com/users/saburbutt/followers", "following_url": "https://api.github.com/users/saburbutt/following{/other_user}", "gists_url": "https://api.github.com/users/saburbutt/gists{/gist_id}", "starred_url": "https://api.github.com/users/saburbutt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/saburbutt/subscriptions", "organizations_url": "https://api.github.com/users/saburbutt/orgs", "repos_url": "https://api.github.com/users/saburbutt/repos", "events_url": "https://api.github.com/users/saburbutt/events{/privacy}", "received_events_url": "https://api.github.com/users/saburbutt/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You cannot save directly remotely like this (though it could be nice to be able to do this in the future, cc @madlag).\r\n\r\nYou need to first create a repo with `transformers-cli repo create`, or directly on the website.\r\n\r\nThen clone it locally and add your files, then push.\r\n\r\nHopefully #8520 makes it clearer?\r\n", "yes. Thankyou. :)", "Hello, I have the same issue with different outputs. I followed all steps in https://huggingface.co/transformers/model_sharing.html \r\nin order, entered my account with transformers-cli login, created repo, installed lfs, cloned repo, added BERT model via \"git add BERTMODEL\", committed and pushed but always I got the same error. \r\n\r\nremote: \r\nremote: -------------------------------------------------------------------------\r\nremote: Your push was rejected because it contains files larger than 10M.\r\nremote: Please use https://git-lfs.github.com/ to store larger files.\r\nremote: -------------------------------------------------------------------------\r\nremote: \r\nremote: Offending files:\r\nremote: - BERTMODEL (ref: refs/heads/main)\r\nTo https://huggingface.co/Serdar/your-model-name\r\n ! [remote rejected] main -> main (pre-receive hook declined)\r\n\r\nOS: Ubuntu 20.04\r\n\r\nI already used \"git lfs install\" but I could not figure out this problem. I wish someone can help", "@serdarakyol Please make sure you read the Getting started guide at https://git-lfs.github.com/ – in your case I think you didn't lfs-track your actual model file", "@julien-c Thank you so much. fixed the problem" ]
1,605
1,614
1,605
NONE
null
The model I am using (Bert, XLNet ...): RoBERTa for question answering. I am trying to follow the tutorial given in https://huggingface.co/transformers/model_sharing.html, and while I am able to load my model from the local repository, I am unable to save my model and tokenizer using `model.save_pretrained("https://huggingface.co/saburbutt/testing")` and `tokenizer.save_pretrained("https://huggingface.co/saburbutt/testing")`. If I try to open the link, it says "Cannot GET /saburbutt/testing/tokenizer_config.json". When I try `echo "hello" >> README.md` or use git functions, it gives me the error "fatal: not a git repository (or any of the parent directories): .git". The task I am working on is SQuAD. I expect the model to be saved in the huggingface/saburbutt/testing repository.
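The workflow the maintainer describes below (create the repo, clone it locally, add files, push) can also be expressed in Python. This is a minimal sketch using `huggingface_hub`, which is my own assumption here since the thread itself uses `transformers-cli` and plain git; it presumes the repo `saburbutt/testing` was already created on the website, and `./my-local-model` is a placeholder path:

```python
from huggingface_hub import Repository
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model = AutoModelForQuestionAnswering.from_pretrained("./my-local-model")
tokenizer = AutoTokenizer.from_pretrained("./my-local-model")

# Clone the (already created) hub repo, write the files into it, then push.
repo = Repository(local_dir="testing", clone_from="saburbutt/testing")
model.save_pretrained(repo.local_dir)      # save_pretrained expects a local path,
tokenizer.save_pretrained(repo.local_dir)  # not a URL
repo.push_to_hub(commit_message="add model and tokenizer")
```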
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8512/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8512/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8511
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8511/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8511/comments
https://api.github.com/repos/huggingface/transformers/issues/8511/events
https://github.com/huggingface/transformers/issues/8511
742,073,991
MDU6SXNzdWU3NDIwNzM5OTE=
8,511
Adding Confusion matrix support in Trainer
{ "login": "prajjwal1", "id": 24690051, "node_id": "MDQ6VXNlcjI0NjkwMDUx", "avatar_url": "https://avatars.githubusercontent.com/u/24690051?v=4", "gravatar_id": "", "url": "https://api.github.com/users/prajjwal1", "html_url": "https://github.com/prajjwal1", "followers_url": "https://api.github.com/users/prajjwal1/followers", "following_url": "https://api.github.com/users/prajjwal1/following{/other_user}", "gists_url": "https://api.github.com/users/prajjwal1/gists{/gist_id}", "starred_url": "https://api.github.com/users/prajjwal1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/prajjwal1/subscriptions", "organizations_url": "https://api.github.com/users/prajjwal1/orgs", "repos_url": "https://api.github.com/users/prajjwal1/repos", "events_url": "https://api.github.com/users/prajjwal1/events{/privacy}", "received_events_url": "https://api.github.com/users/prajjwal1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi there! We love contributions, but metrics should now be implemented directly in the datasets library, not inside Transformers. So you should check there if it does not already exist, and if not, suggest a Pr on that repository :-)", "Okay then, I will close it." ]
1,605
1,605
1,605
CONTRIBUTOR
null
I want to add confusion matrix support in Trainer. It would be a useful addition. The only dependency would be `sklearn`, which this library already uses for metrics. It would allow users to better understand the predictions coming from the model. @sgugger Let me know if this is something you want to add.
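Even without built-in support, a confusion matrix can already be produced through `Trainer`'s `compute_metrics` hook. A minimal sketch with `sklearn` (the accuracy key is illustrative):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    print(confusion_matrix(labels, preds))  # Trainer logs only scalar metrics,
                                            # so print/save the matrix separately
    return {"accuracy": float((preds == labels).mean())}

# Passed at construction time: Trainer(..., compute_metrics=compute_metrics)
```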
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8511/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8511/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8510
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8510/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8510/comments
https://api.github.com/repos/huggingface/transformers/issues/8510/events
https://github.com/huggingface/transformers/issues/8510
742,039,909
MDU6SXNzdWU3NDIwMzk5MDk=
8,510
Finetune TFBertForMaskedLM model.fit() ValueError
{ "login": "MarsSu0618", "id": 72376532, "node_id": "MDQ6VXNlcjcyMzc2NTMy", "avatar_url": "https://avatars.githubusercontent.com/u/72376532?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MarsSu0618", "html_url": "https://github.com/MarsSu0618", "followers_url": "https://api.github.com/users/MarsSu0618/followers", "following_url": "https://api.github.com/users/MarsSu0618/following{/other_user}", "gists_url": "https://api.github.com/users/MarsSu0618/gists{/gist_id}", "starred_url": "https://api.github.com/users/MarsSu0618/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MarsSu0618/subscriptions", "organizations_url": "https://api.github.com/users/MarsSu0618/orgs", "repos_url": "https://api.github.com/users/MarsSu0618/repos", "events_url": "https://api.github.com/users/MarsSu0618/events{/privacy}", "received_events_url": "https://api.github.com/users/MarsSu0618/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Maybe @jplu has an idea!", "Hello @MarsSu0618 \r\n\r\nThe bad news is that it is currently not possible to train an LM from scratch or fine tune it with `.fit()`. The good one is that we are heavily working on it and should be feasible soon.\r\n\r\nSorry for the inconvenience.", "@jplu \r\nSo i can not fine tune Bert MLM model with fit(), right? \r\nBecause i alter pytorch framework(train loop) and it can be work.\r\n\r\nIn addtition, I guess maybe should be divide feature and labels. So I change new tensor as follows:\r\n```python\r\n({'attention_mask': <tf.Tensor: shape=(256,), dtype=int32, numpy=\r\n array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], dtype=int32)>,\r\n 'input_ids': <tf.Tensor: shape=(256,), dtype=int32, numpy=\r\n array([ 101, 1962, 102, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0], dtype=int32)>,\r\n 'token_type_ids': <tf.Tensor: shape=(256,), dtype=int32, numpy=\r\n array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], dtype=int32)>},\r\n <tf.Tensor: shape=(256,), dtype=int32, numpy=\r\n array([-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,\r\n -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,\r\n -100, -100, -100, -100, -100, -100, 
-100, -100, -100, -100, -100,\r\n -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,\r\n -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,\r\n -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,\r\n -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,\r\n -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,\r\n -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,\r\n -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,\r\n -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,\r\n -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,\r\n -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,\r\n -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,\r\n -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,\r\n -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,\r\n -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,\r\n -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,\r\n -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,\r\n -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,\r\n -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,\r\n -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,\r\n -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,\r\n -100, -100, -100], dtype=int32)>)\r\n```\r\n\r\nBut Error message is change\r\n```\r\nWARNING:tensorflow:Gradients do not exist for variables ['tf_bert_for_masked_lm_6/bert/pooler/dense/kernel:0', 'tf_bert_for_masked_lm_6/bert/pooler/dense/bias:0'] when minimizing the loss.\r\nWARNING:tensorflow:Gradients do not exist for variables ['tf_bert_for_masked_lm_6/bert/pooler/dense/kernel:0', 'tf_bert_for_masked_lm_6/bert/pooler/dense/bias:0'] when minimizing the loss.\r\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/indexed_slices.py:432: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.\r\n \"Converting sparse IndexedSlices to a dense Tensor of unknown shape. 
\"\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-110-48f551163bd4> in <module>()\r\n 4 \r\n 5 \r\n----> 6 model.fit(batched_tfdataset, epochs=1, verbose=1)\r\n\r\n10 frames\r\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)\r\n 971 except Exception as e: # pylint:disable=broad-except\r\n 972 if hasattr(e, \"ag_error_metadata\"):\r\n--> 973 raise e.ag_error_metadata.to_exception(e)\r\n 974 else:\r\n 975 raise\r\n\r\nTypeError: in user code:\r\n\r\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:806 train_function *\r\n return step_function(self, iterator)\r\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:796 step_function **\r\n outputs = model.distribute_strategy.run(run_step, args=(data,))\r\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:1211 run\r\n return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)\r\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2585 call_for_each_replica\r\n return self._call_for_each_replica(fn, args, kwargs)\r\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2945 _call_for_each_replica\r\n return fn(*args, **kwargs)\r\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:789 run_step **\r\n outputs = model.train_step(data)\r\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:759 train_step\r\n self.compiled_metrics.update_state(y, y_pred, sample_weight)\r\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/compile_utils.py:409 update_state\r\n metric_obj.update_state(y_t, y_p, sample_weight=mask)\r\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/utils/metrics_utils.py:90 decorated\r\n update_op = update_state_fn(*args, **kwargs)\r\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/metrics.py:176 update_state_fn\r\n return ag_update_state(*args, **kwargs)\r\n\r\n TypeError: update_state() got multiple values for argument 'sample_weight'\r\n```\r\n\r\nhow to solve the problem, thanks.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread." ]
1,605
1,614
1,614
NONE
null
## The Problem I have been trying to train TFBertForMaskedLM model with tensorflow. But when i use model.fit() always encounter some question.Hope someone can help and propose some solution. ## Reference Paper and sample output The Paper title is "Conditional Bert for Contextual Augmentation". In short, just change type_token_ids to label_ids. if the label of sentence is 5, length is 10 and max_sequence_length = 16. It will process output as follows: ``` input_ids = [101, 523, 791, 3189, 677, 5221, 524, 1920, 686, 102, 0, 0, 0, 0, 0, 0] attention_mask = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0] token_type_ids = [5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 0, 0, 0, 0, 0, 0] labels = [-100, -100, 791, -100, 677, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100] ``` ## Environment - tensorflow == 2.2.0 - huggingface == 3.5.0 - datasets == 1.1.2 - dataset total label is 5. (1~5) - GPU : GCP P100 * 1 ## Dataset output (max_sequence_length=128, batch_size=1) ```python {'attention_mask': <tf.Tensor: shape=(128,), dtype=int32, numpy= array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], dtype=int32)>, 'input_ids': <tf.Tensor: shape=(128,), dtype=int32, numpy= array([ 101, 523, 791, 3189, 677, 5221, 524, 1920, 686, 4518, 6240, 103, 2466, 2204, 2695, 100, 519, 5064, 1918, 736, 2336, 520, 103, 2695, 1564, 4923, 8013, 678, 6734, 8038, 8532, 131, 120, 120, 8373, 119, 103, 9989, 103, 8450, 120, 103, 120, 12990, 8921, 8165, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], dtype=int32)>, 'labels': <tf.Tensor: shape=(128,), dtype=int32, numpy= array([-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 4634, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 4158, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 8429, -100, 119, -100, -100, 100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100], dtype=int32)>, 'token_type_ids': <tf.Tensor: shape=(128,), dtype=int32, numpy= array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], dtype=int32)>} ``` ## Model code ```python from transformers import AdamWeightDecay, TFBertForMaskedLM, BertConfig def create_model(): configuration = BertConfig.from_pretrained('bert-base-chinese') model = 
TFBertForMaskedLM.from_pretrained('bert-base-chinese', config=configuration) model.bert.embeddings.token_type_embeddings = tf.keras.layers.Embedding(5, 768, embeddings_initializer=tf.keras.initializers.TruncatedNormal(stddev=0.02)) return model model = create_model() optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5) loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) metrics = [tf.keras.metrics.Mean(), tf.keras.metrics.SparseCategoricalAccuracy('accuracy')] model.compile(optimizer = optimizer, loss = loss, metrics = metrics) model.fit(tf_sms_dataset, epochs=1, verbose=1) ``` ## Warning Message when use TFBertForMaskedLM ``` Some layers from the model checkpoint at bert-base-chinese were not used when initializing TFBertForMaskedLM: ['nsp___cls'] - This IS expected if you are initializing TFBertForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing TFBertForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). All the layers of TFBertForMaskedLM were initialized from the model checkpoint at bert-base-chinese. If your task is similar to the task the model of the checkpoint was trained on, you can already use TFBertForMaskedLM for predictions without further training. ``` ## Error Message ``` ValueError Traceback (most recent call last) <ipython-input-42-99b78906fef7> in <module>() 5 model.fit(tf_sms_dataset, 6 epochs=1, ----> 7 verbose=1) 10 frames /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py in _method_wrapper(self, *args, **kwargs) 64 def _method_wrapper(self, *args, **kwargs): 65 if not self._in_multi_worker_mode(): # pylint: disable=protected-access ---> 66 return method(self, *args, **kwargs) 67 68 # Running inside `run_distribute_coordinator` already. /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing) 846 batch_size=batch_size): 847 callbacks.on_train_batch_begin(step) --> 848 tmp_logs = train_function(iterator) 849 # Catch OutOfRangeError for Datasets of unknown size. 850 # This blocks until the batch has finished executing. /usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds) 578 xla_context.Exit() 579 else: --> 580 result = self._call(*args, **kwds) 581 582 if tracing_count == self._get_tracing_count(): /usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds) 625 # This is the first call of __call__, so we have to initialize. 
626 initializers = [] --> 627 self._initialize(args, kwds, add_initializers_to=initializers) 628 finally: 629 # At this point we know that the initialization is complete (or less /usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py in _initialize(self, args, kwds, add_initializers_to) 504 self._concrete_stateful_fn = ( 505 self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access --> 506 *args, **kwds)) 507 508 def invalid_creator_scope(*unused_args, **unused_kwds): /usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs) 2444 args, kwargs = None, None 2445 with self._lock: -> 2446 graph_function, _, _ = self._maybe_define_function(args, kwargs) 2447 return graph_function 2448 /usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs) 2775 2776 self._function_cache.missed.add(call_context_key) -> 2777 graph_function = self._create_graph_function(args, kwargs) 2778 self._function_cache.primary[cache_key] = graph_function 2779 return graph_function, args, kwargs /usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes) 2665 arg_names=arg_names, 2666 override_flat_arg_shapes=override_flat_arg_shapes, -> 2667 capture_by_value=self._capture_by_value), 2668 self._function_attributes, 2669 # Tell the ConcreteFunction to clean up its graph once it goes out of /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes) 979 _, original_func = tf_decorator.unwrap(python_func) 980 --> 981 func_outputs = python_func(*func_args, **func_kwargs) 982 983 # invariant: `func_outputs` contains only Tensors, CompositeTensors, /usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py in wrapped_fn(*args, **kwds) 439 # __wrapped__ allows AutoGraph to swap in a converted function. We give 440 # the function a weak reference to itself to avoid a reference cycle. 
--> 441 return weak_wrapped_fn().__wrapped__(*args, **kwds) 442 weak_wrapped_fn = weakref.ref(wrapped_fn) 443 /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs) 966 except Exception as e: # pylint:disable=broad-except 967 if hasattr(e, "ag_error_metadata"): --> 968 raise e.ag_error_metadata.to_exception(e) 969 else: 970 raise ValueError: in user code: /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:571 train_function * outputs = self.distribute_strategy.run( /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:951 run ** return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs) /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2290 call_for_each_replica return self._call_for_each_replica(fn, args, kwargs) /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2649 _call_for_each_replica return fn(*args, **kwargs) /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:541 train_step ** self.trainable_variables) /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:1804 _minimize trainable_variables)) /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:521 _aggregate_gradients filtered_grads_and_vars = _filter_grads(grads_and_vars) /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:1219 _filter_grads ([v.name for _, v in grads_and_vars],)) ValueError: No gradients provided for any variable: ['tf_bert_for_masked_lm_2/bert/embeddings/word_embeddings/weight:0', 'tf_bert_for_masked_lm_2/bert/embeddings/position_embeddings/embeddings:0', 'tf_bert_for_masked_lm_2/bert/embeddings/LayerNorm/gamma:0', 'tf_bert_for_masked_lm_2/bert/embeddings/LayerNorm/beta:0', 'tf_bert_for_masked_lm_2/bert/embeddings/embedding_1/embeddings:0', 'tf_bert_for_masked_lm_2/bert/encoder/layer_._0/attention/self/query/kernel:0', 'tf_bert_for_masked_lm_2/bert/encoder/layer_._0/attention/self/query/bias:0', 'tf_bert_for_masked_lm_2/bert/encoder/layer_._0/attention/self/key/kernel:0', 'tf_bert_for_masked_lm_2/bert/encoder/layer_._0/attention/self/key/bias:0', 'tf_bert_for_masked_lm_2/bert/encoder/layer_._0/attention/self/value/kernel:0', 'tf_bert_for_masked_lm_2/bert/encoder/layer_._0/attention/self/value/bias:0', 'tf_bert_for_masked_lm_2/bert/encoder/layer_._0/attention/output/dense/kernel:0', 'tf_bert_for_masked_lm_2/bert/encoder/layer_._0/attention/output/dense/bias:0', 'tf_bert_for_masked_lm_2/bert/encoder/layer_._0/attention/output/LayerNorm/gamma:0', 'tf_bert_for_masked_lm_2/bert/encoder/layer_._0/attention/output/LayerNorm/beta:0', 'tf_bert_for_masked_lm_2/bert/encoder/layer_._0/intermediate/dense/kernel:0', 'tf_bert_for_masked_lm_2/bert/encoder/layer_._0/intermediate/dense/bias:0', 'tf_bert_for_masked_lm_2/bert/encoder/layer_._0/output/dense/kernel:0', 'tf_bert_for_masked_lm_2/bert/encoder/layer_._0/output/dense/bias:0', 'tf_bert_f... ``` Can someone help? Thanks a lot in advance. ## Other Test I also ran a test with an English sentence.
The example is as follows (with the missing imports added): ```python import tensorflow as tf from transformers import TFBertForMaskedLM, BertConfig, BertTokenizer def create_model(): configuration = BertConfig.from_pretrained('bert-base-uncased') model = TFBertForMaskedLM.from_pretrained('bert-base-uncased', config=configuration) return model model = create_model() eng_tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') token_info = eng_tokenizer(text="We are very happy to show you the 🤗 Transformers library.", padding='max_length', max_length=20) optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5) loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) metrics = [tf.keras.metrics.Mean(), tf.keras.metrics.SparseCategoricalAccuracy("acc")] dataset = tf.data.Dataset.from_tensor_slices(dict(token_info)) dataset = dataset.batch(1).prefetch(tf.data.experimental.AUTOTUNE) model.compile(optimizer = optimizer, loss = model.compute_loss, metrics = metrics) model.fit(dataset) ``` The token_info output dataset: ``` { 'input_ids': [101, 2057, 2024, 2200, 103, 2000, 2265, 2017, 103, 100, 19081, 3075, 1012, 102, 0, 0, 0, 0, 0, 0] 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0] 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] 'labels': [-100, -100, -100, -100, 3407, -100, -100, -100, 1996, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100] } ``` I get the same error: ``` ValueError: in user code: /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:571 train_function * outputs = self.distribute_strategy.run( /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:951 run ** return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs) /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2290 call_for_each_replica return self._call_for_each_replica(fn, args, kwargs) /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2649 _call_for_each_replica return fn(*args, **kwargs) /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:541 train_step ** self.trainable_variables) /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:1804 _minimize trainable_variables)) /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:521 _aggregate_gradients filtered_grads_and_vars = _filter_grads(grads_and_vars) /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:1219 _filter_grads ([v.name for _, v in grads_and_vars],)) ValueError: No gradients provided for any variable: ['tf_bert_for_masked_lm_2/bert/embeddings/word_embeddings/weight:0', 'tf_bert_for_masked_lm_2/bert/embeddings/position_embeddings/embeddings:0', 'tf_bert_for_masked_lm_2/bert/embeddings/token_type_embeddings/embeddings:0', 'tf_bert_for_masked_lm_2/bert/embeddings/LayerNorm/gamma:0', 'tf_bert_for_masked_lm_2/bert/embeddings/LayerNorm/beta:0', ``` I'm not sure whether the problem lies in how fit() integrates with the model.
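For reference, a minimal sketch of one way to make this trainable with Keras — unverified for this exact conditional-BERT setup, combining suggestions that appear in related threads (disabling the extra model outputs, and giving Keras an explicit `(features, labels)` split); the `masked_lm_loss` and `split_labels` names are illustrative:

```python
import tensorflow as tf
from transformers import TFBertForMaskedLM, BertConfig

def masked_lm_loss(labels, logits):
    # Score only positions whose label is not -100 (the ignore index
    # used in the dataset above).
    active = tf.not_equal(labels, -100)
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(
        from_logits=True, reduction=tf.keras.losses.Reduction.NONE)
    return loss_fn(tf.boolean_mask(labels, active),
                   tf.boolean_mask(logits, active))

# Returning only the logits gives Keras a single output to differentiate.
config = BertConfig.from_pretrained(
    'bert-base-chinese',
    output_attentions=False, output_hidden_states=False, use_cache=False)
model = TFBertForMaskedLM.from_pretrained('bert-base-chinese', config=config)

def split_labels(batch):
    # Keras computes the loss against y, so 'labels' must leave the
    # feature dict; keeping it inside is one way to end up with
    # "No gradients provided for any variable".
    features = {k: batch[k] for k in ('input_ids', 'attention_mask', 'token_type_ids')}
    return features, batch['labels']

# train_ds = tf_sms_dataset.map(split_labels)
model.compile(optimizer=tf.keras.optimizers.Adam(3e-5), loss=masked_lm_loss)
# model.fit(train_ds, epochs=1)
```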
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8510/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8510/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8509
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8509/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8509/comments
https://api.github.com/repos/huggingface/transformers/issues/8509/events
https://github.com/huggingface/transformers/pull/8509
741,920,203
MDExOlB1bGxSZXF1ZXN0NTIwMTUxMjc3
8,509
Model templates encoder only
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thank you both for your reviews!" ]
1,605
1,605
1,605
MEMBER
null
Only merge the encoder part, not the encoder-decoder part of #7636. Will work on a decoder in the future. Applied your comments @sgugger @patrickvonplaten, but opened a new PR on a new branch so that we can keep the old one for reference when integrating the encoder-decoder model.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8509/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8509/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8509", "html_url": "https://github.com/huggingface/transformers/pull/8509", "diff_url": "https://github.com/huggingface/transformers/pull/8509.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8509.patch", "merged_at": 1605286771000 }
https://api.github.com/repos/huggingface/transformers/issues/8508
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8508/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8508/comments
https://api.github.com/repos/huggingface/transformers/issues/8508/events
https://github.com/huggingface/transformers/issues/8508
741,916,532
MDU6SXNzdWU3NDE5MTY1MzI=
8,508
TPU issue: possible memory leak in eval loop
{ "login": "zcain117", "id": 14796584, "node_id": "MDQ6VXNlcjE0Nzk2NTg0", "avatar_url": "https://avatars.githubusercontent.com/u/14796584?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zcain117", "html_url": "https://github.com/zcain117", "followers_url": "https://api.github.com/users/zcain117/followers", "following_url": "https://api.github.com/users/zcain117/following{/other_user}", "gists_url": "https://api.github.com/users/zcain117/gists{/gist_id}", "starred_url": "https://api.github.com/users/zcain117/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zcain117/subscriptions", "organizations_url": "https://api.github.com/users/zcain117/orgs", "repos_url": "https://api.github.com/users/zcain117/repos", "events_url": "https://api.github.com/users/zcain117/events{/privacy}", "received_events_url": "https://api.github.com/users/zcain117/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The problem is that you are aggregating all your predictions on the TPU host, with a big evaluation set. You should use the `eval_accumulation_steps` argument to pass the predictions back to the CPU every, let's say 20 evaluation steps for instance to avoid the OOM.", "Thanks for the response!\r\nI started a version of the workload that uses that flag and I'll update here once it finishes the training loop", "With that flag, I don't get the same OOM error. Instead I see:\r\n```\r\nE 2020-11-13T05:36:28.200219317Z 11/13/2020 05:36:28 - INFO - run_glue - *** Evaluate ***\r\nE 2020-11-13T05:36:28.201262406Z [INFO|trainer.py:388] 2020-11-13 05:36:28,200 >> The following columns in the evaluation set don't have a corresponding argument in `XLNetForSequenceClassification.forward` and have been ignored: premise, hypothesis, idx.\r\nE 2020-11-13T05:36:28.205409874Z [INFO|trainer.py:1387] 2020-11-13 05:36:28,204 >> ***** Running Evaluation *****\r\nE 2020-11-13T05:36:28.205583892Z [INFO|trainer.py:1388] 2020-11-13 05:36:28,205 >> Num examples = 9815\r\nE 2020-11-13T05:36:28.205718259Z [INFO|trainer.py:1389] 2020-11-13 05:36:28,205 >> Batch size = 32\r\nE 2020-11-13T05:43:14.914374736Z \r\n 0%| | 0/39 [00:00<?, ?it/s]\r\n 5%|5 | 2/39 [00:10<03:14, 5.26s/it]\r\n 8%|7 | 3/39 [00:21<04:09, 6.92s/it]\r\n 10%|# | 4/39 [00:31<04:41, 8.04s/it]\r\n 13%|#2 | 5/39 [00:42<05:02, 8.89s/it]\r\n 15%|#5 | 6/39 [00:53<05:13, 9.51s/it]\r\n 18%|#7 | 7/39 [01:05<05:21, 10.04s/it]\r\n 21%|## | 8/39 [01:16<05:20, 10.34s/it]\r\n 23%|##3 | 9/39 [01:27<05:19, 10.64s/it]\r\n 26%|##5 | 10/39 [01:38<05:10, 10.71s/it]Exception in device=TPU:0: tensorflow/compiler/xla/xla_client/mesh_service.cc:364 : Failed to meet rendezvous 'eval_preds_1_0': Sent message larger than max (1342183400 vs. 
1073741824) (8)\r\nE 2020-11-13T05:43:14.914454025Z Traceback (most recent call last):\r\nE 2020-11-13T05:43:14.914462893Z File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py\", line 329, in _mp_start_fn\r\nE 2020-11-13T05:43:14.914469141Z _start_fn(index, pf_cfg, fn, args)\r\nE 2020-11-13T05:43:14.914474634Z File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py\", line 323, in _start_fn\r\nE 2020-11-13T05:43:14.914481036Z fn(gindex, *args)\r\nE 2020-11-13T05:43:14.914486906Z File \"/transformers/examples/text-classification/run_glue.py\", line 414, in _mp_fn\r\nE 2020-11-13T05:43:14.914495083Z main()\r\nE 2020-11-13T05:43:14.914623679Z File \"/transformers/examples/text-classification/run_glue.py\", line 370, in main\r\nE 2020-11-13T05:43:14.914648860Z eval_result = trainer.evaluate(eval_dataset=eval_dataset)\r\nE 2020-11-13T05:43:14.914657065Z File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/trainer.py\", line 1313, in evaluate\r\nE 2020-11-13T05:43:14.914667922Z prediction_loss_only=True if self.compute_metrics is None else None,\r\nE 2020-11-13T05:43:14.914675010Z File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/trainer.py\", line 1431, in prediction_loop\r\nE 2020-11-13T05:43:14.914681724Z preds_gatherer.add_arrays(self._gather_and_numpify(preds_host, \"eval_preds\"))\r\nE 2020-11-13T05:43:14.914712087Z File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/trainer.py\", line 1474, in _gather_and_numpify\r\nE 2020-11-13T05:43:14.914718679Z tensors = nested_xla_mesh_reduce(tensors, name)\r\nE 2020-11-13T05:43:14.914724791Z File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/trainer_pt_utils.py\", line 112, in nested_xla_mesh_reduce\r\nE 2020-11-13T05:43:14.914731470Z return type(tensors)(nested_xla_mesh_reduce(t, f\"{name}_{i}\") for i, t in enumerate(tensors))\r\nE 2020-11-13T05:43:14.914737871Z File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/trainer_pt_utils.py\", line 112, in <genexpr>\r\nE 2020-11-13T05:43:14.914744687Z return type(tensors)(nested_xla_mesh_reduce(t, f\"{name}_{i}\") for i, t in enumerate(tensors))\r\nE 2020-11-13T05:43:14.914751282Z File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/trainer_pt_utils.py\", line 112, in nested_xla_mesh_reduce\r\nE 2020-11-13T05:43:14.914761474Z return type(tensors)(nested_xla_mesh_reduce(t, f\"{name}_{i}\") for i, t in enumerate(tensors))\r\nE 2020-11-13T05:43:14.914768115Z File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/trainer_pt_utils.py\", line 112, in <genexpr>\r\nE 2020-11-13T05:43:14.914774306Z return type(tensors)(nested_xla_mesh_reduce(t, f\"{name}_{i}\") for i, t in enumerate(tensors))\r\nE 2020-11-13T05:43:14.914780896Z File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/trainer_pt_utils.py\", line 113, in nested_xla_mesh_reduce\r\nE 2020-11-13T05:43:14.914788363Z return xm.mesh_reduce(name, tensors, torch.cat)\r\nE 2020-11-13T05:43:14.914794375Z File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/core/xla_model.py\", line 909, in mesh_reduce\r\nE 2020-11-13T05:43:14.914801139Z xdata = rendezvous(tag, bio.getvalue())\r\nE 2020-11-13T05:43:14.914806782Z File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/core/xla_model.py\", line 861, in rendezvous\r\nE 
2020-11-13T05:43:14.914813625Z return torch_xla._XLAC._xla_rendezvous(get_ordinal(), tag, payload, replicas)\r\nE 2020-11-13T05:43:14.914819959Z RuntimeError: tensorflow/compiler/xla/xla_client/mesh_service.cc:364 : Failed to meet rendezvous 'eval_preds_1_0': Sent message larger than max (1342183400 vs. 1073741824) (8)\r\nE 2020-11-13T05:43:15.468075089Z \r\n 26%|##5 | 10/39 [02:11<06:20, 13.12s/it]\r\n```\r\n\r\nI'll try some things on my side. It looks like the accumulation was fine for \"eval_losses\" but then failed on \"eval_preds\". I will just try a more frequent eval accumulation and a smaller batch size and see if that results in a smaller message being sent between TPU/CPU", "It still looks like a problem of memory (from the `Sent message larger than max` in the stack trace). Maybe try a lower `eval_accumulation_step`?\r\n\r\nMaybe we should move those tensors to the CPU before doing the mesh reduce to save a bit of host memory (right now they are reduced on all hosts *then* moved).", "I have a version running now with half the accumulation size and half the eval batch size.\r\n\r\nMemory saving on device is probably always good but in this case it seems to be complaining about the size of the transfer payload. If you don't reduce before moving, probably the size of the transfer would be even bigger", "I tried with `--eval_accumulation_steps 5` instead of 10 and `--per_device_eval_batch_size 16` instead of 32 and ran into:\r\n\r\n`Exception in device=TPU:4: tensorflow/compiler/xla/xla_client/mesh_service.cc:364 : Failed to meet rendezvous 'eval_preds_1_0': Received message larger than max (335550440 vs. 4194304) (8)`\r\n\r\nThe 335550440 number is much less than the previous error message's larger number 1342183400. I will try `--eval_accumulation_steps 1` just in case but I'm wondering if this error means something else than what I was assuming", "`eval_accumulation_steps 1` resulted in the same error:\r\n\r\n```\r\nE 2020-11-17T00:27:24.619933766Z main()\r\nE 2020-11-17T00:27:24.619937169Z File \"/transformers/examples/text-classification/run_glue.py\", line 370, in main\r\nE 2020-11-17T00:27:24.619940804Z eval_result = trainer.evaluate(eval_dataset=eval_dataset)\r\nE 2020-11-17T00:27:24.619944189Z File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/trainer.py\", line 1313, in evaluate\r\nE 2020-11-17T00:27:24.619947752Z prediction_loss_only=True if self.compute_metrics is None else None,\r\nE 2020-11-17T00:27:24.619951181Z File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/trainer.py\", line 1431, in prediction_loop\r\nE 2020-11-17T00:27:24.619954905Z preds_gatherer.add_arrays(self._gather_and_numpify(preds_host, \"eval_preds\"))\r\nE 2020-11-17T00:27:24.619958638Z File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/trainer.py\", line 1474, in _gather_and_numpify\r\nE 2020-11-17T00:27:24.619962458Z tensors = nested_xla_mesh_reduce(tensors, name)\r\nE 2020-11-17T00:27:24.619965855Z File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/trainer_pt_utils.py\", line 112, in nested_xla_mesh_reduce\r\nE 2020-11-17T00:27:24.619976695Z return type(tensors)(nested_xla_mesh_reduce(t, f\"{name}_{i}\") for i, t in enumerate(tensors))\r\nE 2020-11-17T00:27:24.619980624Z File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/trainer_pt_utils.py\", line 112, in <genexpr>\r\nE 2020-11-17T00:27:24.619984750Z return type(tensors)(nested_xla_mesh_reduce(t, f\"{name}_{i}\") for 
i, t in enumerate(tensors))\r\nE 2020-11-17T00:27:24.619988344Z File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/trainer_pt_utils.py\", line 112, in nested_xla_mesh_reduce\r\nE 2020-11-17T00:27:24.619992533Z return type(tensors)(nested_xla_mesh_reduce(t, f\"{name}_{i}\") for i, t in enumerate(tensors))\r\nE 2020-11-17T00:27:24.619996086Z File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/trainer_pt_utils.py\", line 112, in <genexpr>\r\nE 2020-11-17T00:27:24.619999738Z return type(tensors)(nested_xla_mesh_reduce(t, f\"{name}_{i}\") for i, t in enumerate(tensors))\r\nE 2020-11-17T00:27:24.620003216Z File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/trainer_pt_utils.py\", line 113, in nested_xla_mesh_reduce\r\nE 2020-11-17T00:27:24.620006752Z return xm.mesh_reduce(name, tensors, torch.cat)\r\nE 2020-11-17T00:27:24.620010015Z File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/core/xla_model.py\", line 909, in mesh_reduce\r\nE 2020-11-17T00:27:24.620013568Z xdata = rendezvous(tag, bio.getvalue())\r\nE 2020-11-17T00:27:24.620016833Z File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/core/xla_model.py\", line 861, in rendezvous\r\nE 2020-11-17T00:27:24.620020510Z return torch_xla._XLAC._xla_rendezvous(get_ordinal(), tag, payload, replicas)\r\nE 2020-11-17T00:27:24.620024011Z RuntimeError: tensorflow/compiler/xla/xla_client/mesh_service.cc:364 : Failed to meet rendezvous 'eval_preds_1_0': Received message larger than max (67114984 vs. 4194304) (8)\r\n```", "It may be linked to the issue of XLNet outputing its memories on top of the logits (there is a PR under review to fix that).", "That sounds plausible since this issue is only affecting xlnet and none of our other tests.\r\n\r\nIs this the right PR: https://github.com/huggingface/transformers/pull/8567 ?", "Yes this PR will fix that, but current v4 release candidate should have another fix on the `Trainer` side (which basically ignores some of the keys in the model outputs).", "Looks like #8567 was submitted and now our xlnet test started passing. Thank you!", "Glad to hear it's fixed your issue :-) " ]
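For reference, the flag discussed in the comments above maps directly onto `TrainingArguments` when driving `Trainer` from Python rather than the CLI (a minimal sketch; the value 20 is the illustrative one from the thread):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir='MNLI',
    per_device_eval_batch_size=8,
    # Offload accumulated predictions from the device to the CPU every
    # 20 eval steps instead of holding the full eval set on the host.
    eval_accumulation_steps=20,
)
```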
1,605
1,606
1,606
CONTRIBUTOR
null
I am running into a HBM OOM during the eval loop of xlnet (`--model_name_or_path xlnet-large-cased`) when running on TPUs. No matter which batch size I use, the behavior is the same: 1. training loop succeeds 2. eval loop starts, makes it about halfway, then the TPU runs out of HBM memory and eval loop dies All the other models that we test are OK. The `xlnet-large-cased` test last passed on 2020-09-14. Since this is unrelated to batch size, I thought maybe there is a memory leak on the TPU. I think the eval loop is the more likely culprit than the training loop since the only OOM happens during eval. Here are the last few lines of output before oom: ``` E 2020-11-12T04:51:27.984001264Z Saving model checkpoint to MNLI E 2020-11-12T04:51:27.989368910Z Configuration saved in MNLI/config.json E 2020-11-12T04:51:40.438957029Z Model weights saved in MNLI/pytorch_model.bin E 2020-11-12T04:51:40.535782031Z 11/12/2020 04:51:40 - INFO - run_glue - *** Evaluate *** E 2020-11-12T04:51:40.536480018Z The following columns in the evaluation set don't have a corresponding argument in `XLNetForSequenceClassification.forward` and have been ignored: idx, hypothesis, premise. E 2020-11-12T04:51:40.540513400Z ***** Running Evaluation ***** E 2020-11-12T04:51:40.540566285Z Num examples = 9815 E 2020-11-12T04:51:40.540575559Z Batch size = 8 E 2020-11-12T05:11:26.995136217Z 0%| | 0/154 [00:00<?, ?it/s] 1%|1 | 2/154 [00:11<14:01, 5.53s/it] 2%|1 | 3/154 [00:22<18:15, 7.25s/it] ... 49%|####9 | 76/154 [14:34<15:49, 12.17s/it] 50%|##### | 77/154 [14:48<16:25, 12.80s/it]2020-11-12 05:11:26.994477: E 511 tensorflow/compiler/xla/xla_client/xla_util.cc:76] >>> Dumping Computation 0 ``` I'm not sure what the issue could be. It seems like both the [training loop](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py#L743) and the [eval loop](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py#L1409) are using `ParallelLoader`, which should call `xm.mark_step` for [every call to `next`](https://github.com/pytorch/xla/blob/master/torch_xla/distributed/parallel_loader.py#L37). Does anyone else have any ideas what could be happening? ## Environment info - `transformers` version: 3.5.0 - Platform: Linux-4.9.0-13-amd64-x86_64-with-debian-9.13 - Python version: 3.6.10 - PyTorch version (GPU?): 1.8.0a0+d0df29a (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: No - Using distributed or parallel set-up in script?: Yes - Using TPU in script?: Yes ### Who can help @sgugger @LysandreJik ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: MNLI * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. git clone https://github.com/huggingface/transformers.git 2. cd transformers && pip install . 3. pip install datasets 4. 
Training command: ``` python examples/xla_spawn.py \ --num_cores 8 \ examples/text-classification/run_glue.py \ --logging_dir=./tensorboard-metrics \ --task_name MNLI \ --cache_dir ./cache_dir \ --do_train \ --do_eval \ --num_train_epochs 3 \ --max_seq_length 128 \ --learning_rate 3e-5 \ --output_dir MNLI \ --overwrite_output_dir \ --logging_steps 100 \ --save_steps 3000 \ --overwrite_cache \ --tpu_metrics_debug \ --model_name_or_path xlnet-large-cased \ --per_device_train_batch_size 32 \ --per_device_eval_batch_size 8 ``` ## Expected behavior Eval loop finishes without TPU OOM.
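A hedged amendment to the command above, adding the accumulation flag suggested in the comments earlier in this thread (20 is an arbitrary illustrative value; all other arguments stay exactly as listed):

```
python examples/xla_spawn.py \
  --num_cores 8 \
  examples/text-classification/run_glue.py \
  --eval_accumulation_steps 20 \
  ... # remaining arguments exactly as in the command above
```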
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8508/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8508/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8507
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8507/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8507/comments
https://api.github.com/repos/huggingface/transformers/issues/8507/events
https://github.com/huggingface/transformers/issues/8507
741,894,466
MDU6SXNzdWU3NDE4OTQ0NjY=
8,507
Fill-mask pipeline removes space after token prediction when loading a further pre-trained model based on roberta-base
{ "login": "ironflood", "id": 11771531, "node_id": "MDQ6VXNlcjExNzcxNTMx", "avatar_url": "https://avatars.githubusercontent.com/u/11771531?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ironflood", "html_url": "https://github.com/ironflood", "followers_url": "https://api.github.com/users/ironflood/followers", "following_url": "https://api.github.com/users/ironflood/following{/other_user}", "gists_url": "https://api.github.com/users/ironflood/gists{/gist_id}", "starred_url": "https://api.github.com/users/ironflood/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ironflood/subscriptions", "organizations_url": "https://api.github.com/users/ironflood/orgs", "repos_url": "https://api.github.com/users/ironflood/repos", "events_url": "https://api.github.com/users/ironflood/events{/privacy}", "received_events_url": "https://api.github.com/users/ironflood/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Could this be reopened. I'm facing the same issue.", "@sk- Same. Still getting this with `distilroberta-base`", "Could you solve it? ", "Hi @Bachstelze, if you're still experiencing this problem, could you open a new issue? Since the original issue is old and there have been many changes to the modeling and tokenization code we can't be sure the same thing is being addressed. " ]
1,605
1,706
1,611
NONE
null
## Environment info - `transformers` version: 3.5.0 - Platform: Linux-4.15.0-122-generic-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.5 - PyTorch version (GPU?): 1.7.0+cu101 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: False - Using distributed or parallel set-up in script?: False ### Who can help @mfuntowicz ## Information Model I am using (Bert, XLNet ...): **roberta-base** The problem arises when using: * [x] the official example scripts: -> continued **pre-training roberta-base** using the v3.5.0 examples/language-modeling/**run_mlm_wwm.py** * [x] my own modified scripts: To test the model during training I simply instantiate a pipeline with the target pre-training checkpoint folder and feed it masked strings to check the probabilities: ``` unmasker = pipeline('fill-mask', model=model_checkpoint_path) results = unmasker(masked_text) print(json.dumps(results, indent=4)) ``` ## Expected behavior The expected behavior for the input string "The goal of MASK is happiness." when loading the model "roberta-base" would be: [ { "sequence": "The goal of life is happiness.", "score": 0.07787031680345535, "token": 301, "token_str": "\u0120life" }, { "sequence": "The goal of meditation is happiness.", "score": 0.040741581469774246, "token": 20183, "token_str": "\u0120meditation" } ] ## Observed behavior For the same input string I get no space after the predicted token when loading the further pre-trained model from a checkpoint folder; example result: [ { "sequence": "The goal of Kiwis happiness.", "score": 0.11430764198303223, "token": 21472, "token_str": "\u0120Kiw" }, { "sequence": "The goal of anis happiness.", "score": 0.04334629327058792, "token": 41, "token_str": "\u0120an" }, { "sequence": "The goal of buis happiness.", "score": 0.03720756620168686, "token": 10306, "token_str": "\u0120bu" } ] As I thought it might be a tokenizer issue with the checkpoint, I tried specifying the tokenizer from the roberta-base model that the pre-training was continued from. This solves the issue, so it seems the pre-training steps corrupted the tokenizer that is loaded from the checkpoint. The results I get after 200,000 steps of additional pre-training from roberta-base: [ { "sequence": "The goal of this is happiness.", "score": 0.28572556376457214, "token": 42, "token_str": "\u0120this" }, { "sequence": "The goal of it is happiness.", "score": 0.10664933174848557, "token": 24, "token_str": "\u0120it" }, { "sequence": "The goal of all is happiness.", "score": 0.07055338472127914, "token": 70, "token_str": "\u0120all" }, { "sequence": "The goal of life is happiness.", "score": 0.056005414575338364, "token": 301, "token_str": "\u0120life" } ] -> So regardless of the quality of the resulting token predictions, loading the original roberta-base tokenizer solves the issue.
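A workaround sketch consistent with the observation above — load the tokenizer from the base model and write it into the checkpoint folder so that later `pipeline(...)` calls pick up intact files (`model_checkpoint_path` is the same variable as in the snippet above):

```python
from transformers import RobertaTokenizerFast, pipeline

tokenizer = RobertaTokenizerFast.from_pretrained('roberta-base')
# Persist the known-good tokenizer files next to the model weights.
tokenizer.save_pretrained(model_checkpoint_path)

unmasker = pipeline('fill-mask', model=model_checkpoint_path, tokenizer=tokenizer)
print(unmasker('The goal of <mask> is happiness.'))
```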
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8507/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8507/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8506
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8506/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8506/comments
https://api.github.com/repos/huggingface/transformers/issues/8506/events
https://github.com/huggingface/transformers/issues/8506
741,855,259
MDU6SXNzdWU3NDE4NTUyNTk=
8,506
DPR model: FileNotFoundError: Couldn't find file
{ "login": "danyaljj", "id": 2441454, "node_id": "MDQ6VXNlcjI0NDE0NTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2441454?v=4", "gravatar_id": "", "url": "https://api.github.com/users/danyaljj", "html_url": "https://github.com/danyaljj", "followers_url": "https://api.github.com/users/danyaljj/followers", "following_url": "https://api.github.com/users/danyaljj/following{/other_user}", "gists_url": "https://api.github.com/users/danyaljj/gists{/gist_id}", "starred_url": "https://api.github.com/users/danyaljj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/danyaljj/subscriptions", "organizations_url": "https://api.github.com/users/danyaljj/orgs", "repos_url": "https://api.github.com/users/danyaljj/repos", "events_url": "https://api.github.com/users/danyaljj/events{/privacy}", "received_events_url": "https://api.github.com/users/danyaljj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for reporting !\r\nIndeed the index was renamed recently. I fixed it, it should be good now", "Got it. Upon re-trying the code, it works fine. " ]
1,605
1,605
1,605
CONTRIBUTOR
null
I am using the DPR model: ```python from transformers import DPRQuestionEncoderTokenizer, DPRQuestionEncoder from datasets import load_dataset question_tokenizer = DPRQuestionEncoderTokenizer.from_pretrained('facebook/dpr-question_encoder-single-nq-base') question_encoder = DPRQuestionEncoder.from_pretrained('facebook/dpr-question_encoder-single-nq-base') wiki = load_dataset("wiki_dpr", with_embeddings=False, with_index=True, split="train") def get_top(question, topk=5): question_emb = question_encoder(**question_tokenizer(question, return_tensors="pt"))[0].detach().numpy() passages_scores, passages = wiki.get_nearest_examples("embeddings", question_emb, k=topk) all_passages = "" for score, title, text in zip(passages_scores, passages['title'], passages['text']): if len(all_passages.split(" ")) < 450: all_passages += f" ({title}) {text}" return all_passages get_top("who was the first US president?") ``` This was working until last week. However, now when running it I am getting the following error: ``` Downloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 232k/232k [00:00<00:00, 1.65MB/s] Downloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 493/493 [00:00<00:00, 443kB/s] Downloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 438M/438M [00:05<00:00, 73.8MB/s] Downloading: 7.91kB [00:00, 7.04MB/s] Downloading: 21.9kB [00:00, 19.5MB/s] Using custom data configuration psgs_w100.no_embeddings.compressed Downloading and preparing dataset wiki_dpr/psgs_w100.no_embeddings.compressed (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/danielk/.cache/huggingface/datasets/wiki_dpr/psgs_w100.no_embeddings.compressed/0.0.0/14b973bf2a456087ff69c0fd34526684eed22e48e0dfce4338f9a22b965ce7c2...
Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.54k/1.54k [00:00<00:00, 1.66MB/s] Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 13.8G/13.8G [03:41<00:00, 62.3MB/s] Traceback (most recent call last): File "2.create_tasks.py", line 9, in <module> wiki = load_dataset("wiki_dpr", with_embeddings=False, with_index=True, split="train") File "/home/danielk/qoogle-experiments/env37/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/home/danielk/qoogle-experiments/env37/lib/python3.7/site-packages/datasets/builder.py", line 468, in download_and_prepare self._download_prepared_from_hf_gcs() File "/home/danielk/qoogle-experiments/env37/lib/python3.7/site-packages/datasets/builder.py", line 507, in _download_prepared_from_hf_gcs resource_path = utils.cached_path(remote_cache_dir + "/" + resource_file_name) File "/home/danielk/qoogle-experiments/env37/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/home/danielk/qoogle-experiments/env37/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 474, in get_from_cache raise FileNotFoundError("Couldn't find file at {}".format(url)) FileNotFoundError: Couldn't find file at https://storage.googleapis.com/huggingface-nlp/cache/datasets/wiki_dpr/psgs_w100.no_embeddings.compressed/0.0.0/psgs_w100.nq.IVFPQ4096_HNSW32_PQ64-IP-train.faiss ``` I wonder if the changes made to the model-hub have anything to do with this. @LysandreJik @lhoestq @julien-c Here is my environment, for completeness: ``` Python 3.7.5 (default, Nov 7 2019, 10:50:52) [GCC 8.3.0] on linux ``` and ``` tensorboard 2.4.0 tensorboard-plugin-wit 1.7.0 tensorboardX 2.1 tensorflow 2.3.1 tensorflow-datasets 4.1.0 tensorflow-estimator 2.3.0 tensorflow-metadata 0.25.0 tensorflow-text 2.3.0 termcolor 1.1.0 tfds-nightly 4.1.0.dev202011120108 threadpoolctl 2.1.0 tokenizers 0.9.3 torch 1.6.0 tqdm 4.49.0 transformers 3.5.0 typing-extensions 3.7.4.3 urllib3 1.26.1 ```
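Separately, if a hosted prebuilt index is ever unavailable, one fallback is to download the embeddings and build the FAISS index locally — a sketch, with the caveat that the embeddings make the download far larger than the no-embeddings configuration used above:

```python
from datasets import load_dataset

# Download passages together with their DPR embeddings, then index locally
# instead of fetching the precomputed .faiss file.
wiki = load_dataset('wiki_dpr', with_embeddings=True, with_index=False, split='train')
wiki.add_faiss_index(column='embeddings')
scores, passages = wiki.get_nearest_examples('embeddings', question_emb, k=5)
```

Here `question_emb` is the encoded question from the snippet above.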
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8506/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8506/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8505
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8505/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8505/comments
https://api.github.com/repos/huggingface/transformers/issues/8505/events
https://github.com/huggingface/transformers/issues/8505
741,855,059
MDU6SXNzdWU3NDE4NTUwNTk=
8,505
Unexpected behavior when using PubMedBERT with AutoModelForMaskedLM
{ "login": "rahuln", "id": 3958904, "node_id": "MDQ6VXNlcjM5NTg5MDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/3958904?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rahuln", "html_url": "https://github.com/rahuln", "followers_url": "https://api.github.com/users/rahuln/followers", "following_url": "https://api.github.com/users/rahuln/following{/other_user}", "gists_url": "https://api.github.com/users/rahuln/gists{/gist_id}", "starred_url": "https://api.github.com/users/rahuln/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rahuln/subscriptions", "organizations_url": "https://api.github.com/users/rahuln/orgs", "repos_url": "https://api.github.com/users/rahuln/repos", "events_url": "https://api.github.com/users/rahuln/events{/privacy}", "received_events_url": "https://api.github.com/users/rahuln/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "I know it's an old issue but I just came across this page.\r\n\r\nThe original PubMedBERT checkpoint didn't have the mask prediction heads, but we updated the checkpoint ~10 months ago \r\nhttps://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract\r\n\r\nAlso, we have a new biomed+clinical domain-specific model if you're interested: https://huggingface.co/microsoft/BiomedVLP-CXR-BERT-general\r\n\r\n@rahuln " ]
1,605
1,658
1,611
CONTRIBUTOR
null
## Information I'm getting some strange behavior when using `AutoModelForMaskedLM` with PubMedBERT to impute masked tokens. The screenshot below shows a simple example where I would expect PubMedBERT to give reasonable values, but the suggested tokens are really strange. As shown just below this, `bert-base-uncased` seems to behave reasonably. <img width="1013" alt="Screen Shot 2020-11-12 at 10 28 13 AM" src="https://user-images.githubusercontent.com/3958904/98984623-4e96b400-24d7-11eb-9a5f-faac70b1b399.png"> Same code as above, in text: ```python import torch from transformers import AutoTokenizer, AutoModelForMaskedLM model_name = 'microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract' tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForMaskedLM.from_pretrained(model_name).to('cuda') text = f'Heart disease is {tokenizer.mask_token} leading cause of death in the United States.' tokenized = tokenizer(text, return_tensors='pt').to('cuda') print(tokenizer.convert_ids_to_tokens(tokenized.input_ids.squeeze())) output = model(**tokenized, return_dict=True) output.logits.size() print(tokenizer.convert_ids_to_tokens(torch.topk(output.logits[0, 4, :], 10).indices)) model_name = 'bert-base-uncased' tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForMaskedLM.from_pretrained(model_name).to('cuda') tokenized = tokenizer(text, return_tensors='pt').to('cuda') print(tokenizer.convert_ids_to_tokens(tokenized.input_ids.squeeze())) output = model(**tokenized, return_dict=True) print(tokenizer.convert_ids_to_tokens(torch.topk(output.logits[0, 4, :], 10).indices)) ``` ## Environment info - `transformers` version: 3.3.1 - Platform: Linux-3.10.0-957.5.1.el7.x86_64-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyTorch version (GPU?): 1.6.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No
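One quick cross-check, independent of the `AutoModelForMaskedLM` call path (a sketch, not a diagnosis): run the same prompt through the `fill-mask` pipeline and watch the load-time warning about newly initialized weights — if the checkpoint shipped without a pretrained masked-LM head, that head is randomly initialized and the mask predictions will look arbitrary no matter how the model is invoked:

```python
from transformers import pipeline

name = 'microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract'
fill = pipeline('fill-mask', model=name)
# PubMedBERT uses a BERT-style tokenizer, so the mask token is [MASK].
print(fill('Heart disease is [MASK] leading cause of death in the United States.'))
```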
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8505/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8505/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8504
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8504/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8504/comments
https://api.github.com/repos/huggingface/transformers/issues/8504/events
https://github.com/huggingface/transformers/issues/8504
741,830,285
MDU6SXNzdWU3NDE4MzAyODU=
8,504
Failed to push model repo
{ "login": "mymusise", "id": 6883957, "node_id": "MDQ6VXNlcjY4ODM5NTc=", "avatar_url": "https://avatars.githubusercontent.com/u/6883957?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mymusise", "html_url": "https://github.com/mymusise", "followers_url": "https://api.github.com/users/mymusise/followers", "following_url": "https://api.github.com/users/mymusise/following{/other_user}", "gists_url": "https://api.github.com/users/mymusise/gists{/gist_id}", "starred_url": "https://api.github.com/users/mymusise/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mymusise/subscriptions", "organizations_url": "https://api.github.com/users/mymusise/orgs", "repos_url": "https://api.github.com/users/mymusise/repos", "events_url": "https://api.github.com/users/mymusise/events{/privacy}", "received_events_url": "https://api.github.com/users/mymusise/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "julien-c", "id": 326577, "node_id": "MDQ6VXNlcjMyNjU3Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/julien-c", "html_url": "https://github.com/julien-c", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "organizations_url": "https://api.github.com/users/julien-c/orgs", "repos_url": "https://api.github.com/users/julien-c/repos", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "received_events_url": "https://api.github.com/users/julien-c/received_events", "type": "User", "site_admin": false }
[ { "login": "julien-c", "id": 326577, "node_id": "MDQ6VXNlcjMyNjU3Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/julien-c", "html_url": "https://github.com/julien-c", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "organizations_url": "https://api.github.com/users/julien-c/orgs", "repos_url": "https://api.github.com/users/julien-c/repos", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "received_events_url": "https://api.github.com/users/julien-c/received_events", "type": "User", "site_admin": false } ]
[ "And `git-lfs` is already installed in my local.\r\n```bash\r\n$ git lfs install\r\nUpdated git hooks.\r\nGit LFS initialized.\r\n```\r\n\r\nWhat should I do?", "Well, it's magical. It works if I clone it again. But this time is different from the first time.\r\n\r\n- The first time, the model file seems not complete.\r\n```bash\r\n/data2/wiki_zh ⌚ 1:44:35\r\n$ git clone https://huggingface.co/mymusise/gpt2-medium-chinese\r\nCloning into 'gpt2-medium-chinese'...\r\nremote: Enumerating objects: 15, done.\r\nremote: Counting objects: 100% (15/15), done.\r\nremote: Compressing objects: 100% (14/14), done.\r\nremote: Total 15 (delta 3), reused 0 (delta 0), pack-reused 0\r\nUnpacking objects: 100% (15/15), done.\r\n\r\n/data2/wiki_zh ⌚ 1:44:50\r\n$ cd gpt2-medium-chinese\r\n\r\n/data2/wiki_zh/gpt2-medium-chinese on  main ⌚ 1:44:56\r\n$ ls\r\nconfig.json tf_model.h5 vocab.txt\r\n\r\n/data2/wiki_zh/gpt2-medium-chinese on  main ⌚ 1:44:56\r\n$ ls -lh\r\ntotal 44K\r\n-rw-rw-r-- 1 mymusise mymusise 849 11月 13 01:44 config.json\r\n-rw-rw-r-- 1 mymusise mymusise 135 11月 13 01:44 tf_model.h5\r\n-rw-rw-r-- 1 mymusise mymusise 35K 11月 13 01:44 vocab.txt\r\n```\r\n\r\n- When I clone model repo again, the model file seems complete.\r\n\r\n```bash\r\n/data2/wiki_zh ⌚ 2:40:12\r\n$ rm -rf gpt2-medium-chinese\r\n\r\n/data2/wiki_zh ⌚ 2:40:16\r\n$ git clone https://huggingface.co/mymusise/gpt2-medium-chinese \r\nCloning into 'gpt2-medium-chinese'...\r\nremote: Enumerating objects: 15, done.\r\nremote: Counting objects: 100% (15/15), done.\r\nremote: Compressing objects: 100% (14/14), done.\r\nremote: Total 15 (delta 3), reused 0 (delta 0), pack-reused 0\r\nUnpacking objects: 100% (15/15), done.\r\n\r\n/data2/wiki_zh/gpt2-medium-chinese on  main ⌚ 2:43:08\r\n$ ls -lh \r\ntotal 1.2G\r\n-rw-rw-r-- 1 mymusise mymusise 849 11月 13 02:40 config.json\r\n-rw-rw-r-- 1 mymusise mymusise 1.2G 11月 13 02:42 tf_model.h5\r\n-rw-rw-r-- 1 mymusise mymusise 35K 11月 13 02:40 vocab.txt\r\n```\r\n\r\nThen I can push model successfully when I update the model file.", "I think, the reason why I push fails may be because I haven't added git-lfs before I `git add` the model file for the first time. `git lfs install` may not work after adding a big model file.", "Yes, you need to run `git lfs install` before adding files. I'll make that clearer in the documentation", "alternatively your can use ``` git lfs migrate import --everything ``` even before adding file without lfs. This will reindex files and let you push them using git lfs", "will add this to my upcoming video about `git-lfs` @jqueguiner ❤️ ", "I have to admit I'm lazy sometimes\r\n![image](https://user-images.githubusercontent.com/690878/118976843-c9082b00-b975-11eb-9274-650595fd419d.png)\r\n", "Maybe this one could help (for future searchers ;) )\r\n```\r\nhuggingface-cli lfs-enable-largefiles\r\n```\r\nI had the same problem and was curious why it's working from Trainer that uses huggingface_hub\r\nand found that need to run \"huggingface-cli lfs-enable-largefiles\"\r\nhttps://github.com/huggingface/huggingface_hub/blob/2e81cf3ec04b0dd5ce2acc92d25f8261a8484f3e/src/huggingface_hub/commands/lfs.py#L45\r\n```\r\nThis should be executed once for each model repo that contains a model file >5GB. It's documented in the error\r\n message you get if you just try to git push a 5GB file without having enabled it before.\r\n```" ]
1,605
1,643
1,605
CONTRIBUTOR
null
Hi, when I uploaded my model to the hub as the [new documentation](https://huggingface.co/transformers/model_sharing.html) says, I got this error: ``` Delta compression using up to 16 threads. Compressing objects: 100% (3/3), done. Writing objects: 100% (4/4), 1.07 GiB | 3.08 MiB/s, done. Total 4 (delta 0), reused 1 (delta 0) remote: remote: ------------------------------------------------------------------------- remote: Your push was rejected because it contains files larger than 10M. remote: Please use https://git-lfs.github.com/ to store larger files. remote: ------------------------------------------------------------------------- remote: remote: Offending files: remote: - tf_model.h5 (ref: refs/heads/main) To https://huggingface.co/mymusise/gpt2-medium-chinese ! [remote rejected] main -> main (pre-receive hook declined) error: failed to push some refs to 'https://huggingface.co/mymusise/gpt2-medium-chinese' ``` ## Environment info - `transformers` version: 3.5.0 - Platform: Ubuntu 18.04
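For reference, a sketch of a sequence that avoids this rejection — the key point (see the comments above) is that `git lfs install` must run before the large file is first added; the repo name is the one from the log:

```bash
git lfs install                      # before any `git add` of big files
git clone https://huggingface.co/mymusise/gpt2-medium-chinese
cd gpt2-medium-chinese
git lfs track "*.h5"                 # route the weight file through LFS
git add .gitattributes tf_model.h5
git commit -m "Add TF model weights"
git push

# If the large file was already committed without LFS, rewrite the history:
git lfs migrate import --include="*.h5" --everything
```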
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8504/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8504/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8503
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8503/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8503/comments
https://api.github.com/repos/huggingface/transformers/issues/8503/events
https://github.com/huggingface/transformers/issues/8503
741,786,838
MDU6SXNzdWU3NDE3ODY4Mzg=
8,503
Training the TFGPT2LMHeadModel with model.fit produces error
{ "login": "bjourne", "id": 142475, "node_id": "MDQ6VXNlcjE0MjQ3NQ==", "avatar_url": "https://avatars.githubusercontent.com/u/142475?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bjourne", "html_url": "https://github.com/bjourne", "followers_url": "https://api.github.com/users/bjourne/followers", "following_url": "https://api.github.com/users/bjourne/following{/other_user}", "gists_url": "https://api.github.com/users/bjourne/gists{/gist_id}", "starred_url": "https://api.github.com/users/bjourne/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bjourne/subscriptions", "organizations_url": "https://api.github.com/users/bjourne/orgs", "repos_url": "https://api.github.com/users/bjourne/repos", "events_url": "https://api.github.com/users/bjourne/events{/privacy}", "received_events_url": "https://api.github.com/users/bjourne/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[ { "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false } ]
[ "Hello!\r\n\r\nYour way of computing the loss is wrong. I suggest you to look at how we compute it [here](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_tf_gpt2.py#L650) and [here](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_tf_utils.py#L125). You should also rewrite your metric the same way.", "I know but changing the loss metric won't fix the ValueError.", "It is. Just look at how we do it. Including in the tests.", "Met the same error before, it works for me with removing the `metrics` params when `model.compile`. But I think it's a bad idea.", "Hi, there. I found it won't raise this error if `batch size` == `n_head`. For example, LysandreJik's [gits](https://gist.github.com/LysandreJik/c958925768eb6a9a72609ea99561d1cb) works only `BATCH_SIZE = 12`", "~~Hey, guys. If the output of other layers was used rarely, can we add a controller to select whether return multi-layer logits or not? (correct me if it's necessary to return multi-layer logits.)~~\r\n\r\n~~If we can do this, this [pull request](https://github.com/huggingface/transformers/pull/8584) may help.~~", "@bjourne Hey guy, I think adding `output_attentions=False` and `output_hidden_states=False` may help:\r\n```\r\nmodel = TFGPT2LMHeadModel(GPT2Config(output_attentions=False, output_hidden_states=False, use_cache=False))\r\n```", "I haven't had the chance to try that. Maybe someone else can? Will the trained network with `output_hidden_states=True` though? You need to set it to True when generating text.", "Hello,\r\n\r\nI got the same error.\r\nWhen I tried to set `output_attentions = False`, it didn't seem to be in GPT2Config.\r\nLooking at the [documentation](https://huggingface.co/transformers/model_doc/gpt2.html#gpt2config), GPT2Config doesn't have that parameter, where should I set it?", "> Hello,\r\n> \r\n> I got the same error.\r\n> When I tried to set `output_attentions = False`, it didn't seem to be in GPT2Config.\r\n> Looking at the [documentation](https://huggingface.co/transformers/model_doc/gpt2.html#gpt2config), GPT2Config doesn't have that parameter, where should I set it?\r\n\r\n`output_attentions` is a Parameter of `PretrainedConfig` which is the SuperClass of `GPT2Config`, [see](https://huggingface.co/transformers/main_classes/configuration.html#transformers.PretrainedConfig) ", "Thank you for your reply.\r\n\r\n> PretrainedConfig which is the SuperClass of GPT2Config\r\n\r\nI see, I understand.\r\nI noticed that the version of the transformer I'm using was out of date :cry: \r\n\r\n> GPT2Config(output_attentions=False, output_hidden_states=False, use_cache=False)\r\n\r\nI used the latest version and it worked without any errors!\r\nYour advice was very helpful, thank you!", "This issue has been stale for 1 month." ]
1,605
1,618
1,618
NONE
null
MWE: ```python from transformers.modeling_tf_gpt2 import TFGPT2LMHeadModel from transformers.configuration_gpt2 import GPT2Config from tensorflow.data import Dataset import tensorflow as tf data = tf.random.uniform(shape=[10000], dtype = tf.int32, maxval = 100) src = tf.constant(data) def split_input_target(chunk): return chunk[:-1], chunk[1:] ds = Dataset.from_tensor_slices(src) \ .batch(256 + 1, drop_remainder = True) \ .map(split_input_target) model = TFGPT2LMHeadModel(GPT2Config()) loss = ['sparse_categorical_crossentropy'] + [None] * 12 model.compile(loss = loss, metrics = ['sparse_categorical_accuracy']) model.fit(ds) ``` This generates the error `ValueError: Dimensions must be equal, but are 256 and 12 for '{{node Equal_1}} = Equal[T=DT_FLOAT, incompatible_shape_error=true](Cast_6, Cast_7)' with input shapes: [256,1], [2,256,12,1].` Is there any explanation somewhere on how to train like this?
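A hedged sketch of the pattern recommended in the comments (assumption: a `transformers` version whose `TFGPT2LMHeadModel` accepts a `labels` argument and computes the shifted sparse cross-entropy internally; with tuple returns, the per-token loss is then the first output). Letting the model compute its own loss avoids Keras trying to match a loss/metric against every entry of the multi-output return, which is what triggers the shape error above:

```python
import tensorflow as tf
from transformers import GPT2Config, TFGPT2LMHeadModel

model = TFGPT2LMHeadModel(GPT2Config())
optimizer = tf.keras.optimizers.Adam(3e-5)

@tf.function
def train_step(input_ids):
    with tf.GradientTape() as tape:
        # The model shifts logits/labels internally, so the same ids can be
        # fed as both inputs and labels instead of pre-splitting each chunk.
        outputs = model(input_ids, labels=input_ids, training=True)
        loss = tf.reduce_mean(outputs[0])  # outputs[0] is the per-token loss
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```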
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8503/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8503/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8502
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8502/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8502/comments
https://api.github.com/repos/huggingface/transformers/issues/8502/events
https://github.com/huggingface/transformers/issues/8502
741,736,946
MDU6SXNzdWU3NDE3MzY5NDY=
8,502
TF T5-small with output hidden states and attentions not working
{ "login": "pathikchamaria", "id": 51734126, "node_id": "MDQ6VXNlcjUxNzM0MTI2", "avatar_url": "https://avatars.githubusercontent.com/u/51734126?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pathikchamaria", "html_url": "https://github.com/pathikchamaria", "followers_url": "https://api.github.com/users/pathikchamaria/followers", "following_url": "https://api.github.com/users/pathikchamaria/following{/other_user}", "gists_url": "https://api.github.com/users/pathikchamaria/gists{/gist_id}", "starred_url": "https://api.github.com/users/pathikchamaria/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pathikchamaria/subscriptions", "organizations_url": "https://api.github.com/users/pathikchamaria/orgs", "repos_url": "https://api.github.com/users/pathikchamaria/repos", "events_url": "https://api.github.com/users/pathikchamaria/events{/privacy}", "received_events_url": "https://api.github.com/users/pathikchamaria/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hello!\r\n\r\nUnfortunately this is a known bug we have with few of the TF models. We are currently reworking all the TF models to solve this issue among others.", "@jplu I tried the same thing with Pytorch model also. It is also giving error. Any idea if I can get the attentions with pytorch?", "You get the same error with PyTorch? For PyTorch I will let @patrickvonplaten take the lead to help you, he knows better than me.", "The error is the same instead of list it just says tuple", "Hey @pathikchamaria - is it possible to update your version? 2.11 is very outdated by now. Could you try again with the current version of transformers (3.5) ? ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "@jplu are there any updates on this on the tensorflow side?" ]
1,605
1,620
1,610
NONE
null
- `transformers` version: 2.11 - Platform: Multiple - Python version: multiple ### Who can help T5: @patrickvonplaten tensorflow: @jplu ## Information Model I am using (Bert, XLNet ...): T5 The problem arises when t5-small is loaded from pretrained with output_hidden_states=True, output_attentions=True. Sample script: https://colab.research.google.com/drive/1oF8hMaQg1yl2fE6QPUYKSTZcer4Mlk6S?usp=sharing If these parameters are removed, the script works. I am getting the following error. ```python /usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_utils.py in generate(self, input_ids, max_length, min_length, do_sample, early_stopping, num_beams, temperature, top_k, top_p, repetition_penalty, bad_words_ids, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, num_return_sequences, attention_mask, decoder_start_token_id, use_cache) 780 encoder_outputs=encoder_outputs, 781 attention_mask=attention_mask, --> 782 use_cache=use_cache, 783 ) 784 else: /usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_utils.py in _generate_beam_search(self, input_ids, cur_len, max_length, min_length, do_sample, early_stopping, temperature, top_k, top_p, repetition_penalty, no_repeat_ngram_size, bad_words_ids, bos_token_id, pad_token_id, decoder_start_token_id, eos_token_id, batch_size, num_return_sequences, length_penalty, num_beams, vocab_size, encoder_outputs, attention_mask, use_cache) 1027 input_ids, past=past, attention_mask=attention_mask, use_cache=use_cache 1028 ) -> 1029 outputs = self(**model_inputs) # (batch_size * num_beams, cur_len, vocab_size) 1030 next_token_logits = outputs[0][:, -1, :] # (batch_size * num_beams, vocab_size) 1031 /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, *args, **kwargs) 983 984 with ops.enable_auto_cast_variables(self._compute_dtype_object): --> 985 outputs = call_fn(inputs, *args, **kwargs) 986 987 if self._activity_regularizer: /usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_t5.py in call(self, inputs, **kwargs) 1061 encoder_attention_mask=attention_mask, 1062 head_mask=head_mask, -> 1063 use_cache=use_cache, 1064 ) 1065 /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, *args, **kwargs) 983 984 with ops.enable_auto_cast_variables(self._compute_dtype_object): --> 985 outputs = call_fn(inputs, *args, **kwargs) 986 987 if self._activity_regularizer: /usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_t5.py in call(self, inputs, attention_mask, encoder_hidden_states, encoder_attention_mask, inputs_embeds, head_mask, past_key_value_states, use_cache, training) 572 # required mask seq length can be calculated via length of past 573 # key value states and seq_length = 1 for the last token --> 574 mask_seq_length = shape_list(past_key_value_states[0][0])[2] + seq_length 575 else: 576 mask_seq_length = seq_length IndexError: list index out of range ``` ## Expected behavior How can I get the attentions and hidden states as output? Even if you can only share a sample for PyTorch, I would be able to make it work for TF.
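A hedged PyTorch sketch for the last question (assumptions: the `t5-small` checkpoint and a transformers >= 3.x version where attentions/hidden states come back from a plain forward pass; `generate()` did not propagate them at the time):

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained(
    "t5-small", output_attentions=True, output_hidden_states=True
)

input_ids = tokenizer(
    "translate English to German: How are you?", return_tensors="pt"
).input_ids
# Seed the decoder with its start token for a single decoding step.
decoder_input_ids = torch.full(
    (1, 1), model.config.decoder_start_token_id, dtype=torch.long
)

with torch.no_grad():
    outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)
# With tuple returns, attentions and hidden states are appended after the
# logits; their exact positions depend on the installed version.
```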
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8502/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8502/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8501
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8501/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8501/comments
https://api.github.com/repos/huggingface/transformers/issues/8501/events
https://github.com/huggingface/transformers/issues/8501
741,717,309
MDU6SXNzdWU3NDE3MTczMDk=
8,501
Why does the XLM-RoBERTa tokenizer sometimes produce a standalone start-of-word character (the special underscore with ord = 9601)?
{ "login": "konstantinmiller", "id": 2629945, "node_id": "MDQ6VXNlcjI2Mjk5NDU=", "avatar_url": "https://avatars.githubusercontent.com/u/2629945?v=4", "gravatar_id": "", "url": "https://api.github.com/users/konstantinmiller", "html_url": "https://github.com/konstantinmiller", "followers_url": "https://api.github.com/users/konstantinmiller/followers", "following_url": "https://api.github.com/users/konstantinmiller/following{/other_user}", "gists_url": "https://api.github.com/users/konstantinmiller/gists{/gist_id}", "starred_url": "https://api.github.com/users/konstantinmiller/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/konstantinmiller/subscriptions", "organizations_url": "https://api.github.com/users/konstantinmiller/orgs", "repos_url": "https://api.github.com/users/konstantinmiller/repos", "events_url": "https://api.github.com/users/konstantinmiller/events{/privacy}", "received_events_url": "https://api.github.com/users/konstantinmiller/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Ah, I guess I figured that out. Does it happen when the training data set of the tokenizer never had this token at the beginning of a word but only inside a word?" ]
1,605
1,607
1,607
NONE
null
I'm using the `transformers` library 3.4.0. The XLM-RoBERTa tokenizer in certain cases produces a standalone start-of-word character. Is that intended? For example: ``` tokenizer.tokenize('amerikanische') ['▁', 'amerikanische'] ``` while ``` tokenizer.tokenize('englische') ['▁englische'] ```
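A short hedged check illustrating the explanation given in the thread (assuming the `xlm-roberta-base` checkpoint; the expected prints reflect likely, not guaranteed, vocabulary contents):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("xlm-roberta-base")
vocab = tok.get_vocab()

# When the word-initial piece is absent from the vocabulary, SentencePiece
# falls back to a standalone "▁" marker followed by a word-internal piece.
print("▁amerikanische" in vocab)  # expected: False
print("▁englische" in vocab)      # expected: True
```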
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8501/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8501/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8500
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8500/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8500/comments
https://api.github.com/repos/huggingface/transformers/issues/8500/events
https://github.com/huggingface/transformers/pull/8500
741,710,834
MDExOlB1bGxSZXF1ZXN0NTE5OTc2NTEw
8,500
Fix doc bug
{ "login": "mymusise", "id": 6883957, "node_id": "MDQ6VXNlcjY4ODM5NTc=", "avatar_url": "https://avatars.githubusercontent.com/u/6883957?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mymusise", "html_url": "https://github.com/mymusise", "followers_url": "https://api.github.com/users/mymusise/followers", "following_url": "https://api.github.com/users/mymusise/following{/other_user}", "gists_url": "https://api.github.com/users/mymusise/gists{/gist_id}", "starred_url": "https://api.github.com/users/mymusise/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mymusise/subscriptions", "organizations_url": "https://api.github.com/users/mymusise/orgs", "repos_url": "https://api.github.com/users/mymusise/repos", "events_url": "https://api.github.com/users/mymusise/events{/privacy}", "received_events_url": "https://api.github.com/users/mymusise/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for the fix!" ]
1,605
1,605
1,605
CONTRIBUTOR
null
Fix the example of Trainer, hope it helps. @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8500/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8500/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8500", "html_url": "https://github.com/huggingface/transformers/pull/8500", "diff_url": "https://github.com/huggingface/transformers/pull/8500.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8500.patch", "merged_at": 1605199643000 }
https://api.github.com/repos/huggingface/transformers/issues/8499
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8499/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8499/comments
https://api.github.com/repos/huggingface/transformers/issues/8499/events
https://github.com/huggingface/transformers/issues/8499
741,699,107
MDU6SXNzdWU3NDE2OTkxMDc=
8,499
Unable to install Transformers
{ "login": "ghost", "id": 10137, "node_id": "MDQ6VXNlcjEwMTM3", "avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ghost", "html_url": "https://github.com/ghost", "followers_url": "https://api.github.com/users/ghost/followers", "following_url": "https://api.github.com/users/ghost/following{/other_user}", "gists_url": "https://api.github.com/users/ghost/gists{/gist_id}", "starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghost/subscriptions", "organizations_url": "https://api.github.com/users/ghost/orgs", "repos_url": "https://api.github.com/users/ghost/repos", "events_url": "https://api.github.com/users/ghost/events{/privacy}", "received_events_url": "https://api.github.com/users/ghost/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I guess the issue is that you're using `anaconda` here. Until version v4.0.0, we're not entirely compatible with anaconda as SentencePiece is not on a conda channel.\r\n\r\nIn the meantime, we recommend installing `transformers` in a pip virtual env:\r\n\r\n```shell-script\r\npython -m venv .env\r\nsource .env/bin/activate\r\npip install -e .\r\n```", "@LysandreJik Thanks for your answer! Actually it was not an `anaconda` issue.\r\n\r\nI found the solution! There're 2 version issues in the install requirements. \r\nSee below the steps - but I had to reinstall Python back to 3.8 as Torch/TorchVision don't support 3.9 yet.\r\n\r\n1. Copy content of GitHub repo in a “transformers” folder: https://github.com/huggingface/transformers\r\n\r\n2. `cd transformers`\r\n\r\n3. Change all the `tokenizers` 0.9.3 reference to 0.9.4 in transformers files\r\n\r\n4. Change all the `sentencepiece` 0.1.91 reference to 0.1.94 in transformers files\r\n\r\n5. `brew install pkgconfig`\r\n\r\n6. `python3.8 setup.py install`\r\n\r\nAnd voila! I hope it helps lots of folks struggling. 👍 \r\n\r\n", "@MoonshotQuest - what are \"transformers files\" in the reply above?" ]
1,605
1,655
1,605
NONE
null
Hi all - I'm unable to install transformers from source. I need this for a project, and it's really annoying not to be able to use your amazing work. Could you please help me? :) Thank you so much. **Issue** pip install is blocked at the **sentencepiece-0.1.91** install and crashes **What I tried** - I tried to find a workaround by installing the latest version of sentencepiece (0.1.94), but it doesn't solve the issue - I tried to download the repository locally and change the version requirement in setup.py and requirements.txt, but that doesn't solve it either - My system: MacOS 10.15.7 / Python 3.9.0 / Pip 20.2.4 / Anaconda3 with PyTorch **The error messages and pip list to show you I installed the latest sentencepiece** ``` (env) (base) Cecilias-MacBook-Air:transformers mymacos$ pip3 install -e . Obtaining file:///Users/mymacos/Documents/OpenAI/transformers Installing build dependencies ... done Getting requirements to build wheel ... done Preparing wheel metadata ... done Collecting filelock Using cached filelock-3.0.12-py3-none-any.whl (7.6 kB) Collecting sentencepiece==0.1.91 Using cached sentencepiece-0.1.91.tar.gz (500 kB) ERROR: Command errored out with exit status 1: command: /Users/mymacos/Documents/OpenAI/env/bin/python3 -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/s7/73dlmpfj3253cpbpl6v96rbm0000gn/T/pip-install-5ceji0j1/sentencepiece/setup.py'"'"'; __file__='"'"'/private/var/folders/s7/73dlmpfj3253cpbpl6v96rbm0000gn/T/pip-install-5ceji0j1/sentencepiece/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /private/var/folders/s7/73dlmpfj3253cpbpl6v96rbm0000gn/T/pip-pip-egg-info-svt86xy8 cwd: /private/var/folders/s7/73dlmpfj3253cpbpl6v96rbm0000gn/T/pip-install-5ceji0j1/sentencepiece/ Complete output (5 lines): Package sentencepiece was not found in the pkg-config search path. Perhaps you should add the directory containing `sentencepiece.pc' to the PKG_CONFIG_PATH environment variable No package 'sentencepiece' found Failed to find sentencepiece pkgconfig ---------------------------------------- ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output. (env) (base) Cecilias-MacBook-Air:transformers mymacos$ pip list Package Version ----------------- ------- astroid 2.4.2 isort 5.6.4 lazy-object-proxy 1.4.3 mccabe 0.6.1 numpy 1.19.4 pip 20.2.4 pylint 2.6.0 PyYAML 5.3.1 **sentencepiece 0.1.94** setuptools 50.3.2 six 1.15.0 toml 0.10.2 wheel 0.35.1 wrapt 1.12.1 ```
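A hedged sketch of the workaround assembled from this thread (assumptions: macOS with Homebrew, Python 3.8 installed, and the `sentencepiece`/`tokenizers` version pins in the cloned repo already relaxed as described in the follow-up comment):

```bash
brew install pkg-config        # lets pip build sentencepiece from source
python3.8 -m venv .env         # torch/torchvision did not support 3.9 yet
source .env/bin/activate
pip install --upgrade pip
cd transformers
pip install -e .               # editable install with the relaxed pins
```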
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8499/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8499/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8498
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8498/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8498/comments
https://api.github.com/repos/huggingface/transformers/issues/8498/events
https://github.com/huggingface/transformers/pull/8498
741,690,925
MDExOlB1bGxSZXF1ZXN0NTE5OTU5Nzg1
8,498
Model sharing doc
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,605
1,605
1,605
COLLABORATOR
null
# What does this PR do? This PR expands the model sharing doc with some instructions specific to colab. Unrelated: some fixes in marian.rst that I thought I had pushed directly to master but had not.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8498/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8498/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8498", "html_url": "https://github.com/huggingface/transformers/pull/8498", "diff_url": "https://github.com/huggingface/transformers/pull/8498.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8498.patch", "merged_at": 1605200003000 }
https://api.github.com/repos/huggingface/transformers/issues/8497
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8497/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8497/comments
https://api.github.com/repos/huggingface/transformers/issues/8497/events
https://github.com/huggingface/transformers/issues/8497
741,688,291
MDU6SXNzdWU3NDE2ODgyOTE=
8,497
Error when loading a model cloned without git-lfs is quite cryptic
{ "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "repos_url": "https://api.github.com/users/thomwolf/repos", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "type": "User", "site_admin": false }
[ { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" } ]
closed
false
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[ { "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false } ]
[ "Yep, they way I would go about this would be to programmatically check whether the file is text-only (non-binary) and between 100 and 200 bytes. If it is (and we expected a weights file), it's probably a lfs pointer file.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "unstale" ]
1,605
1,631
1,631
MEMBER
null
# 🚀 Error message request If you forget to install git-LFS (e.g. on Google Colab) and you just do: ```python !git clone https://huggingface.co/facebook/bart-base from transformers import AutoModel model = AutoModel.from_pretrained('./bart-base') ``` The cloning seems to work well but the model weights are not downloaded. The error message is then quite cryptic and could probably be tailored to this (probably) common failure case: ``` loading weights file ./bart-large-cnn/pytorch_model.bin --------------------------------------------------------------------------- UnpicklingError Traceback (most recent call last) /usr/local/lib/python3.6/dist-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 950 try: --> 951 state_dict = torch.load(resolved_archive_file, map_location="cpu") 952 except Exception: 4 frames UnpicklingError: invalid load key, 'v'. During handling of the above exception, another exception occurred: OSError Traceback (most recent call last) /usr/local/lib/python3.6/dist-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 952 except Exception: 953 raise OSError( --> 954 f"Unable to load weights from pytorch checkpoint file for '{pretrained_model_name_or_path}' " 955 f"at '{resolved_archive_file}'" 956 "If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True. " OSError: Unable to load weights from pytorch checkpoint file for './bart-large-cnn' at './bart-large-cnn/pytorch_model.bin'If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True. ``` git-LFS specification files are pretty simple to parse and typically look like this: ``` version https://git-lfs.github.com/spec/v1 oid sha256:097417381d6c7230bd9e3557456d726de6e83245ec8b24f529f60198a67b203a size 440473133 ``` The first *key* is always `version`: https://github.com/git-lfs/git-lfs/blob/master/docs/spec.md
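A hedged sketch of the detection heuristic suggested in the comments (the size bounds are my assumption, and `looks_like_lfs_pointer` is an illustrative helper, not an existing `transformers` function):

```python
import os

def looks_like_lfs_pointer(path: str) -> bool:
    # Pointer files are tiny text files; real weight files are large binaries.
    if not (100 <= os.path.getsize(path) <= 300):
        return False
    try:
        with open(path, "r", encoding="utf-8") as f:
            first_line = f.readline()
    except UnicodeDecodeError:
        return False  # binary content cannot be an LFS pointer file
    return first_line.startswith("version https://git-lfs.github.com/spec/")
```

Hooked into the `OSError` branch above, this would let the loader suggest installing git-lfs and re-cloning instead of showing the unpickling message.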
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8497/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8497/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8496
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8496/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8496/comments
https://api.github.com/repos/huggingface/transformers/issues/8496/events
https://github.com/huggingface/transformers/pull/8496
741,672,876
MDExOlB1bGxSZXF1ZXN0NTE5OTQ0NjAw
8,496
Created ModelCard for Hel-ach-en MT model
{ "login": "Pogayo", "id": 39183794, "node_id": "MDQ6VXNlcjM5MTgzNzk0", "avatar_url": "https://avatars.githubusercontent.com/u/39183794?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Pogayo", "html_url": "https://github.com/Pogayo", "followers_url": "https://api.github.com/users/Pogayo/followers", "following_url": "https://api.github.com/users/Pogayo/following{/other_user}", "gists_url": "https://api.github.com/users/Pogayo/gists{/gist_id}", "starred_url": "https://api.github.com/users/Pogayo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Pogayo/subscriptions", "organizations_url": "https://api.github.com/users/Pogayo/orgs", "repos_url": "https://api.github.com/users/Pogayo/repos", "events_url": "https://api.github.com/users/Pogayo/events{/privacy}", "received_events_url": "https://api.github.com/users/Pogayo/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "This is really cool @Pogayo, thanks for sharing.\r\n\r\nIf you can, please consider adding sample inputs for the inference widget, either in DefaultWidget.ts (see https://huggingface.co/docs#how-can-i-control-my-models-widgets-example-inputs) or in this model card.\r\n\r\nWill also add Acholi to the list in https://huggingface.co/languages", "I don't know if this is the right place to ask, apologies in advance - I am trying to translate on the model page and getting this error:\r\n![image](https://user-images.githubusercontent.com/39183794/99581734-d4d85c00-29e9-11eb-8323-056d95ec4503.png)\r\nI have not been able to figure out what causes it so if you can guide me, I would really love to see this model accessible for people.\r\n\r\nUnrecognized configuration class for this kind of AutoModel: AutoModelForCausalLM. Model type should be one of CamembertConfig, XLMRobertaConfig, RobertaConfig, BertConfig, OpenAIGPTConfig, GPT2Config, TransfoXLConfig, XLNetConfig, XLMConfig, CTRLConfig, ReformerConfig, BertGenerationConfig, XLMProphetNetConfig, ProphetNetConfig.\r\n\r\n", "Did you change the `pipeline_tag` in the meantime ? It's working now:\r\n\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-luo-en?text=Ariyo\r\n\r\nThe error seemed to point it wanted to do text-generation with your model which it can't.", "It is still not working @Narsil. I get a different error now, do you know what might be causing it?\r\n![image](https://user-images.githubusercontent.com/39183794/99583840-cfc8dc00-29ec-11eb-8eb6-d897de192715.png)\r\n\r\nThe model you referenced is a different one- A Luo -English model- This one is Acholi -English" ]
1,605
1,605
1,605
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8496/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8496/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8496", "html_url": "https://github.com/huggingface/transformers/pull/8496", "diff_url": "https://github.com/huggingface/transformers/pull/8496.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8496.patch", "merged_at": 1605728533000 }
https://api.github.com/repos/huggingface/transformers/issues/8495
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8495/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8495/comments
https://api.github.com/repos/huggingface/transformers/issues/8495/events
https://github.com/huggingface/transformers/issues/8495
741,646,503
MDU6SXNzdWU3NDE2NDY1MDM=
8,495
Allow tensorflow tensors as input to Tokenizer
{ "login": "rbrthogan", "id": 9214671, "node_id": "MDQ6VXNlcjkyMTQ2NzE=", "avatar_url": "https://avatars.githubusercontent.com/u/9214671?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rbrthogan", "html_url": "https://github.com/rbrthogan", "followers_url": "https://api.github.com/users/rbrthogan/followers", "following_url": "https://api.github.com/users/rbrthogan/following{/other_user}", "gists_url": "https://api.github.com/users/rbrthogan/gists{/gist_id}", "starred_url": "https://api.github.com/users/rbrthogan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rbrthogan/subscriptions", "organizations_url": "https://api.github.com/users/rbrthogan/orgs", "repos_url": "https://api.github.com/users/rbrthogan/repos", "events_url": "https://api.github.com/users/rbrthogan/events{/privacy}", "received_events_url": "https://api.github.com/users/rbrthogan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I believe @jplu has already used TF Serving. Do you know if it's possible to include tokenization in it?", "Hello!\r\n\r\nUnfortunately it is currently not possible to integrate our tokenizer directly inside a model due to some TensorFlow limitations. Nevertheless, there might be a solution by trying to create your own Tokenization layer such as the one the TF team is [working on](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/TextVectorization).", "Thanks for response and for the link.\r\n\r\nYa, it's a shame that there is still no way to use plain python in the signature.\r\n\r\nI'll likely just find a different work around e.g. converting to PyTorch and serving with TorchServe. \r\n\r\nI'll close this for now.", "I found a working soltuion that doesn't require any changes to Tensorflow or Transformers.\r\n\r\nCommenting because I came across this trying to do something similar. I actually think the issue here is not tensorflow but the transformer type checking for the tokenizer call which doesn't allow for the tensorflow objects.\r\n\r\nI made the following implementation which appears to be working and doesn't rely on anything due to tensorflow limitations:\r\n\r\n```python\r\n# NOTE: the specific model here will need to be overwritten because AutoModel doesn't work\r\nclass CustomModel(transformers.TFDistilBertForSequenceClassification):\r\n\r\n def call_tokenizer(self, input):\r\n if type(input) == list:\r\n return self.tokenizer([str(x) for x in input], return_tensors='tf')\r\n \r\n else:\r\n return self.tokenizer(str(input), return_tensors='tf')\r\n \r\n\r\n\r\n @tf.function(input_signature=[tf.TensorSpec(shape=(1, ), dtype=tf.string)])\r\n def serving(self, content: str):\r\n batch = self.call_tokenizer(content)\r\n batch = dict(batch)\r\n batch = [batch]\r\n output = self.call(batch)\r\n return self.serving_output(output)\r\n\r\n\r\ntokenizer = transformers.AutoTokenizer.from_pretrained(\r\n model_path,\r\n use_fast=True\r\n)\r\n\r\nconfig = transformers.AutoConfig.from_pretrained(\r\n model_path,\r\n num_labels=2,\r\n from_pt=True\r\n)\r\n\r\nmodel = CustomModel.from_pretrained(\r\n model_path,\r\n config=config,\r\n from_pt=True\r\n)\r\n\r\nmodel.tokenizer = tokenizer\r\nmodel.id2label = config.id2label\r\nmodel.save_pretrained(\"model\", saved_model=True)\r\n```", "Hi @maxzzze \r\n\r\nI was also working on including hf tokenizer into tf model. However, I found that inside call_tokenizer, the results tokenizer return would always be the same despites the text input you passed in. \r\n\r\nHave you also encounter such issue? I am thinking save_pretrained wasn't including the tokenizer appropriately.\r\n\r\n> I found a working soltuion that doesn't require any changes to Tensorflow or Transformers.\r\n> \r\n> Commenting because I came across this trying to do something similar. 
I actually think the issue here is not tensorflow but the transformer type checking for the tokenizer call which doesn't allow for the tensorflow objects.\r\n> \r\n> I made the following implementation which appears to be working and doesn't rely on anything due to tensorflow limitations:\r\n> \r\n> ```python\r\n> # NOTE: the specific model here will need to be overwritten because AutoModel doesn't work\r\n> class CustomModel(transformers.TFDistilBertForSequenceClassification):\r\n> \r\n> def call_tokenizer(self, input):\r\n> if type(input) == list:\r\n> return self.tokenizer([str(x) for x in input], return_tensors='tf')\r\n> \r\n> else:\r\n> return self.tokenizer(str(input), return_tensors='tf')\r\n> \r\n> \r\n> \r\n> @tf.function(input_signature=[tf.TensorSpec(shape=(1, ), dtype=tf.string)])\r\n> def serving(self, content: str):\r\n> batch = self.call_tokenizer(content)\r\n> batch = dict(batch)\r\n> batch = [batch]\r\n> output = self.call(batch)\r\n> return self.serving_output(output)\r\n> \r\n> \r\n> tokenizer = transformers.AutoTokenizer.from_pretrained(\r\n> model_path,\r\n> use_fast=True\r\n> )\r\n> \r\n> config = transformers.AutoConfig.from_pretrained(\r\n> model_path,\r\n> num_labels=2,\r\n> from_pt=True\r\n> )\r\n> \r\n> model = CustomModel.from_pretrained(\r\n> model_path,\r\n> config=config,\r\n> from_pt=True\r\n> )\r\n> \r\n> model.tokenizer = tokenizer\r\n> model.id2label = config.id2label\r\n> model.save_pretrained(\"model\", saved_model=True)\r\n> ```\r\n\r\n" ]
1,605
1,631
1,605
NONE
null
Firstly thanks so much for all the amazing work! I'm trying to package a model for use in TF Serving. The problem is that everywhere I see this done, the tokenisation step happens outside of the server. I want to include this step inside the server so the user can just provide raw text as the input and not need to know anything about tokenization. Here's how I'm trying to do it: ``` def save_model(model, tokenizer, output_path): @tf.function(input_signature=[tf.TensorSpec(shape=[None], dtype=tf.string)]) def serving(input_text): inputs = tokenizer(input_text, padding='longest', truncation=True, return_tensors="tf") outputs = model(inputs) logits = outputs[0] probs = tf.nn.softmax(logits, axis=1).numpy()[:, 1] predictions = tf.cast(tf.math.round(probs), tf.int32) return { 'classes': predictions, 'probabilities': probs } print(f'Exporting model for TF Serving in {tf_serving_output}') tf.saved_model.save(model, export_dir=output_path, signatures=serving) ``` where e.g. ``` model = TFAlbertForSequenceClassification.from_pretrained('albert-base-v2', num_labels=num_classes) tokenizer = AutoTokenizer.from_pretrained('albert-base-v2') ``` The problem is that the tokenization step results in ``` AssertionError: text input must of type `str` (single example), `List[str]` (batch or single pretokenized example) or `List[List[str]]` (batch of pretokenized examples). ``` Clearly it wants plain Python strings, not TensorFlow tensors. Would appreciate any help, workarounds, or ideally of course, this to be supported. ----- Running: transformers==3.4.0 tensorflow==2.3.0
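A hedged workaround sketch (assumptions: TF 2.x, with `model` and `tokenizer` as defined above). `tf.py_function` hands the string tensor to Python eagerly, so the tokenizer's type check passes; note that ops created this way still need a Python runtime, so this helps local SavedModel inference but not a stock TF Serving binary:

```python
import tensorflow as tf

def tokenize_py(texts):
    # Inside tf.py_function, texts is an EagerTensor, so .numpy() works here.
    batch = tokenizer([t.decode("utf-8") for t in texts.numpy()],
                      padding="longest", truncation=True, return_tensors="tf")
    return batch["input_ids"], batch["attention_mask"]

@tf.function(input_signature=[tf.TensorSpec(shape=[None], dtype=tf.string)])
def serving(input_text):
    input_ids, attention_mask = tf.py_function(
        tokenize_py, [input_text], Tout=[tf.int32, tf.int32])
    # py_function loses static shapes; restore them for downstream layers.
    input_ids.set_shape([None, None])
    attention_mask.set_shape([None, None])
    logits = model([input_ids, attention_mask])[0]
    probs = tf.nn.softmax(logits, axis=1)[:, 1]
    return {"probabilities": probs}
```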
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8495/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8495/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8494
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8494/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8494/comments
https://api.github.com/repos/huggingface/transformers/issues/8494/events
https://github.com/huggingface/transformers/issues/8494
741,552,250
MDU6SXNzdWU3NDE1NTIyNTA=
8,494
Error occurs when training Transformer-XL with DDP
{ "login": "ismymajia", "id": 17922949, "node_id": "MDQ6VXNlcjE3OTIyOTQ5", "avatar_url": "https://avatars.githubusercontent.com/u/17922949?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ismymajia", "html_url": "https://github.com/ismymajia", "followers_url": "https://api.github.com/users/ismymajia/followers", "following_url": "https://api.github.com/users/ismymajia/following{/other_user}", "gists_url": "https://api.github.com/users/ismymajia/gists{/gist_id}", "starred_url": "https://api.github.com/users/ismymajia/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ismymajia/subscriptions", "organizations_url": "https://api.github.com/users/ismymajia/orgs", "repos_url": "https://api.github.com/users/ismymajia/repos", "events_url": "https://api.github.com/users/ismymajia/events{/privacy}", "received_events_url": "https://api.github.com/users/ismymajia/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,605
1,611
1,611
NONE
null
My env is as below: - `transformers` version: 3.4.0 - Platform: Ubuntu-18.04 - Python version: 3.6.9 - PyTorch version (GPU?): 1.6.0+cu101 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> I am training the Transformer-XL on one machine with multiple GPUs using DDP. My script is as below: python -m torch.distributed.launch --nproc_per_node 4 run_language_modeling.py --output_dir ${model_dir} --tokenizer_name $data_dir/wordpiece-custom.json --config_name $data_dir/$config_file --train_data_files "$data_dir/train*.txt" --eval_data_file $data_dir/valid.txt --block_size=128 --do_train --do_eval --per_device_train_batch_size 1 --gradient_accumulation_steps 1 --learning_rate 6e-4 --weight_decay 0.01 --adam_epsilon 1e-6 --adam_beta1 0.9 --adam_beta2 0.98 --max_steps 500_000 --warmup_steps 24_000 --fp16 --logging_dir ${model_dir}/tensorboard --save_steps 5000 --save_total_limit 20 --seed 108 --max_steps -1 --num_train_epochs 20 --dataloader_num_workers 0 --overwrite_output_dir The following error occurs: [INFO|language_modeling.py:242] 2020-11-11 11:54:46,363 >> Loading features from cached file /opt/ml/input/data/training/kyzhan/huggingface/data/train40G/cached_lm_PreTrainedTokenizerFast_126_train3.txt [took 116.431 s] / th_index_copy main() File "run_hf_train_lm_ti.py", line 338, in main trainer.train(model_path=model_path) File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 758, in train tr_loss += self.training_step(model, inputs) File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 1056, in training_step loss = self.compute_loss(model, inputs) File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 1082, in compute_loss outputs = model(**inputs) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/distributed.py", line 511, in forward output = self.module(*inputs[0], **kwargs[0]) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_transfo_xl.py", line 1056, in forward return_dict=return_dict, File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_transfo_xl.py", line 888, in forward word_emb = self.word_emb(input_ids) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 722, in call_impl result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_transfo_xl.py", line 448, in forward emb_flat.index_copy(0, indices_i, emb_i) RuntimeError: Expected object of scalar type Float but got scalar type Half for argument #4 'source' in call to th_index_copy @TevenLeScao
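A hedged workaround (my assumption: the crash comes from Transformer-XL's adaptive embedding calling `index_copy_` with a half-precision source under `--fp16`, which the model did not handle cleanly at the time), so dropping the flag should unblock training at the cost of speed/memory:

```bash
python -m torch.distributed.launch --nproc_per_node 4 run_language_modeling.py \
    --output_dir ${model_dir} \
    ...  # same arguments as above, minus --fp16
```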
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8494/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8494/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8493
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8493/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8493/comments
https://api.github.com/repos/huggingface/transformers/issues/8493/events
https://github.com/huggingface/transformers/issues/8493
741,521,290
MDU6SXNzdWU3NDE1MjEyOTA=
8,493
I am seeing zero gradient descent (loss does not decrease)
{ "login": "Sniper970119", "id": 30463691, "node_id": "MDQ6VXNlcjMwNDYzNjkx", "avatar_url": "https://avatars.githubusercontent.com/u/30463691?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Sniper970119", "html_url": "https://github.com/Sniper970119", "followers_url": "https://api.github.com/users/Sniper970119/followers", "following_url": "https://api.github.com/users/Sniper970119/following{/other_user}", "gists_url": "https://api.github.com/users/Sniper970119/gists{/gist_id}", "starred_url": "https://api.github.com/users/Sniper970119/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Sniper970119/subscriptions", "organizations_url": "https://api.github.com/users/Sniper970119/orgs", "repos_url": "https://api.github.com/users/Sniper970119/repos", "events_url": "https://api.github.com/users/Sniper970119/events{/privacy}", "received_events_url": "https://api.github.com/users/Sniper970119/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @Sniper970119 do you mind posting this on the forum rather? It's here: https://discuss.huggingface.co\r\n\r\nWe are trying to focus the issues on bug reports and features/model requests.\r\n\r\nThanks a lot.", "> \r\n> \r\n> Hi @Sniper970119 do you mind posting this on the forum rather? It's here: https://discuss.huggingface.co\r\n> \r\n> We are trying to focus the issues on bug reports and features/model requests.\r\n> \r\n> Thanks a lot.\r\n\r\nok,I just post it on the forum.Thank for your reply." ]
1,605
1,605
1,605
CONTRIBUTOR
null
I want to use transformers to do text classification, and I want to code it myself rather than use `TFBertForSequenceClassification`, so I wrote the model with `TFBertModel` and `tf.keras.layers.Dense`, but there is no gradient descent in my code. I tried to find what is wrong with my code but I can't, so I am submitting this issue to ask for some help. My code is here: Model: ![图片](https://user-images.githubusercontent.com/30463691/98935026-e15a4180-251d-11eb-87a5-0d23bcb7cf4a.png) ![图片](https://user-images.githubusercontent.com/30463691/98935038-e4edc880-251d-11eb-9c3d-ee32df987590.png) And I know the train data is the test data, just for a quick debug. And when I train this model, ![图片](https://user-images.githubusercontent.com/30463691/98934626-5ed18200-251d-11eb-9eb3-c0330a6398bf.png)
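A generic hedged sketch of the usual pattern (the original code is only visible as screenshots, so this is not a diff against it; the checkpoint name and sequence length are placeholders). Two common causes of a flat loss are a non-differentiable op between BERT and the head, and a learning rate far above the ~2e-5 that fine-tuning typically needs:

```python
import tensorflow as tf
from transformers import TFBertModel

bert = TFBertModel.from_pretrained("bert-base-chinese")  # placeholder checkpoint

input_ids = tf.keras.Input(shape=(128,), dtype=tf.int32, name="input_ids")
attention_mask = tf.keras.Input(shape=(128,), dtype=tf.int32, name="attention_mask")

sequence_output = bert([input_ids, attention_mask])[0]
cls_vector = sequence_output[:, 0, :]  # [CLS] token, kept inside the graph
probs = tf.keras.layers.Dense(2, activation="softmax")(cls_vector)

model = tf.keras.Model([input_ids, attention_mask], probs)
model.compile(optimizer=tf.keras.optimizers.Adam(2e-5),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```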
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8493/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8493/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8492
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8492/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8492/comments
https://api.github.com/repos/huggingface/transformers/issues/8492/events
https://github.com/huggingface/transformers/pull/8492
741,511,540
MDExOlB1bGxSZXF1ZXN0NTE5ODExMzc4
8,492
Rework some TF tests
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,605
1,605
1,605
CONTRIBUTOR
null
# What does this PR do? Rework some TF tests to make them compliant with dict returns, and simplify some of them.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8492/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8492/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8492", "html_url": "https://github.com/huggingface/transformers/pull/8492", "diff_url": "https://github.com/huggingface/transformers/pull/8492.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8492.patch", "merged_at": 1605305238000 }
https://api.github.com/repos/huggingface/transformers/issues/8491
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8491/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8491/comments
https://api.github.com/repos/huggingface/transformers/issues/8491/events
https://github.com/huggingface/transformers/pull/8491
741,491,150
MDExOlB1bGxSZXF1ZXN0NTE5Nzk0MzQy
8,491
Fix check scripts for Windows
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "It doesn't make any change on Linux, and you've tested it on Windows. Could we get someone using MacOS to double-check it doesn't break anything for them before merging?", "I think @LysandreJik is on MacOS?", "I'm actually between Manjaro and EndeavourOS, but I'll check on a Mac." ]
1,605
1,605
1,605
CONTRIBUTOR
null
# What does this PR do? The current check-X scripts read/write with `os.linesep` as the newline separator. On Windows this leaves the overwritten files with CRLF line endings instead of LF; the same logic applies on Mac with CR. Now, Python will always use LF to read from and write to the files.
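A hedged sketch of the change this PR describes (illustrative only; the real edits live in the repo's `utils/check_*.py` scripts, whose exact function names may differ). Python's `newline` parameter pins the separator regardless of platform:

```python
# Read without newline translation and always write plain LF back out.
with open(path, "r", encoding="utf-8", newline="\n") as f:
    content = f.read()

with open(path, "w", encoding="utf-8", newline="\n") as f:
    f.write(fixed_content)  # fixed_content: whatever the check script rewrote
```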
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8491/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8491/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8491", "html_url": "https://github.com/huggingface/transformers/pull/8491", "diff_url": "https://github.com/huggingface/transformers/pull/8491.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8491.patch", "merged_at": 1605207161000 }
https://api.github.com/repos/huggingface/transformers/issues/8490
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8490/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8490/comments
https://api.github.com/repos/huggingface/transformers/issues/8490/events
https://github.com/huggingface/transformers/pull/8490
741,473,603
MDExOlB1bGxSZXF1ZXN0NTE5Nzc5NzUz
8,490
New TF loading weights
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I have added a lot of comments in the method to make it clearer, I removed a small part of the code that was due to the moment where I was updating to the new names in same time. @LysandreJik @sgugger it should be easier to understand now.", "It's a lot clearer, thanks. There are still unaddressed comments however, and I can't comment on line 259 but it should be removed now (since the dict is create two lines below).", "What is missing now?", "There is Lysandre's comments at line 283 and mine about the loop line 277. Like I said in my previous comments, doing the two functions in one is great, I just don't get the added complexity of the new `model_layers_name_value` variable when we could stick to the previous loop in the function `load_tf_weights` while adding the behavior of `detect_tf_missing_unexpected_layers`.\r\n\r\nThe comments are a great addition, thanks a lot for adding those!", "I have addressed the Lysandre's comment at line 283 and yours for the loop at line 277. Do you see anything else?", "The typos should be fixed now. Sorry for that.", "Good to merge for me too!" ]
1,605
1,605
1,605
CONTRIBUTOR
null
# What does this PR do? This PR improves the way we load the TensorFlow weights. Previously we had to go through the instantiated model and the checkpoint twice: - once to load the weights from the checkpoint into the instantiated model - once to compute the missing and unexpected keys Now both are done in a single pass, which makes loading faster.
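A rough sketch of the single-pass idea (purely illustrative, not the actual loader: `checkpoint` is assumed to be a flat `{weight_name: array}` mapping, which is simpler than the real HDF5 layout the loader handles):

```python
def load_weights_single_pass(model, checkpoint):
    # One traversal both loads values and records the key bookkeeping.
    model_weights = {w.name: w for w in model.weights}
    missing_keys = []
    for name, weight in model_weights.items():
        if name in checkpoint:
            weight.assign(checkpoint[name])  # load into the instantiated model
        else:
            missing_keys.append(name)        # in the model, absent from the checkpoint
    unexpected_keys = sorted(set(checkpoint) - set(model_weights))
    return missing_keys, unexpected_keys
```

The point of the design is that assignment and bookkeeping share one traversal instead of two.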
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8490/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8490/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8490", "html_url": "https://github.com/huggingface/transformers/pull/8490", "diff_url": "https://github.com/huggingface/transformers/pull/8490.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8490.patch", "merged_at": 1605714512000 }
https://api.github.com/repos/huggingface/transformers/issues/8489
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8489/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8489/comments
https://api.github.com/repos/huggingface/transformers/issues/8489/events
https://github.com/huggingface/transformers/pull/8489
741,467,565
MDExOlB1bGxSZXF1ZXN0NTE5Nzc0ODM1
8,489
Fix typo in roberta-base-squad2-v2 model card
{ "login": "antoniolanza1996", "id": 40452030, "node_id": "MDQ6VXNlcjQwNDUyMDMw", "avatar_url": "https://avatars.githubusercontent.com/u/40452030?v=4", "gravatar_id": "", "url": "https://api.github.com/users/antoniolanza1996", "html_url": "https://github.com/antoniolanza1996", "followers_url": "https://api.github.com/users/antoniolanza1996/followers", "following_url": "https://api.github.com/users/antoniolanza1996/following{/other_user}", "gists_url": "https://api.github.com/users/antoniolanza1996/gists{/gist_id}", "starred_url": "https://api.github.com/users/antoniolanza1996/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/antoniolanza1996/subscriptions", "organizations_url": "https://api.github.com/users/antoniolanza1996/orgs", "repos_url": "https://api.github.com/users/antoniolanza1996/repos", "events_url": "https://api.github.com/users/antoniolanza1996/events{/privacy}", "received_events_url": "https://api.github.com/users/antoniolanza1996/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,605
1,605
1,605
CONTRIBUTOR
null
# What does this PR do? Simply adding `-v2` for Haystack API model loading. Furthermore, I've also changed `model` to `model_name_or_path` due to a breaking change in Haystack (https://github.com/deepset-ai/haystack/pull/510). ## Who can review? Model Cards: @julien-c
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8489/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8489/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8489", "html_url": "https://github.com/huggingface/transformers/pull/8489", "diff_url": "https://github.com/huggingface/transformers/pull/8489.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8489.patch", "merged_at": 1605176977000 }
https://api.github.com/repos/huggingface/transformers/issues/8488
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8488/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8488/comments
https://api.github.com/repos/huggingface/transformers/issues/8488/events
https://github.com/huggingface/transformers/pull/8488
741,445,821
MDExOlB1bGxSZXF1ZXN0NTE5NzU2OTcx
8,488
[WIP] T5v1.1 & MT5
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Maybe wrong model config for T5.1.1. For instance, T5.1.1.small should have num_layers=8 and num_heads=6.\r\n\r\nSee https://github.com/google-research/text-to-text-transfer-transformer/blob/master/t5/models/gin/models/t5.1.1.small.gin", "> Maybe wrong model config for T5.1.1. For instance, T5.1.1.small should have num_layers=8 and num_heads=6.\r\n> \r\n> See https://github.com/google-research/text-to-text-transfer-transformer/blob/master/t5/models/gin/models/t5.1.1.small.gin\r\n\r\nThanks yeah, I implemented that. \r\n\r\nThe new model structure is now equal to mesh t5 v1.1. \r\n\r\nIf you download the t5v1.1 `t5-small` checkpoint and replace the corresponding path in `check_t5_against_hf.py` you can see that the models are equal. \r\n\r\nThere is still quite some work to do: write more tests, lots of cleaning and better design, and check if mT5 works with it.", "> If you download the t5v1.1 `t5-small` checkpoint and replace the corresponding path in `check_t5_against_hf.py` you can see that the models are equal.\r\n\r\nHi, `check_t5_against_hf.py` still fails if I use a longer input text instead of `Hello there`, like `Hello there. Let's put more words in more languages than I originally thought.`\r\n", "> > If you download the t5v1.1 `t5-small` checkpoint and replace the corresponding path in `check_t5_against_hf.py` you can see that the models are equal.\r\n> \r\n> Hi, `check_t5_against_hf.py` still fails if I use a longer input text instead of `Hello there`, like `Hello there. Let's put more words in more languages than I originally thought.`\r\n\r\nHmm, it works for me - do you experience that for T5v1.1 or mT5?", "> > > If you download the t5v1.1 `t5-small` checkpoint and replace the corresponding path in `check_t5_against_hf.py` you can see that the models are equal.\r\n> > \r\n> > \r\n> > Hi, `check_t5_against_hf.py` still fails if I use a longer input text instead of `Hello there`, like `Hello there. Let's put more words in more languages than I originally thought.`\r\n> \r\n> Hmm, it works for me - do you experience that for T5v1.1 or mT5?\r\n\r\nAha, the checking is OK now. Yesterday I made a mistake that when I changed the test input sentence in the check script, I didn't update the input length for MTF model from 4 to a longer value like 128. So actually the MTF model and PyTorch model received different inputs, and of course got different results.\r\n\r\nBesides, if I add the z-loss to the CE loss at last, it differs from MTF score again. I just found MTF ignores z-loss when not training ([code](https://github.com/tensorflow/mesh/blob/4f82ba1275e4c335348019fee7974d11ac0c9649/mesh_tensorflow/transformer/transformer.py#L781)). So I think MTF model score does not include z-loss, but its training does, which is absent from HF T5 training. Well, this is absolutely not a blocking issue now.\r\n\r\nAppreciate your great work :) ", "closing in favor of https://github.com/huggingface/transformers/pull/8552." ]
1,605
1,605
1,605
MEMBER
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8488/reactions", "total_count": 13, "+1": 0, "-1": 0, "laugh": 0, "hooray": 5, "confused": 0, "heart": 4, "rocket": 4, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8488/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8488", "html_url": "https://github.com/huggingface/transformers/pull/8488", "diff_url": "https://github.com/huggingface/transformers/pull/8488.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8488.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/8487
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8487/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8487/comments
https://api.github.com/repos/huggingface/transformers/issues/8487/events
https://github.com/huggingface/transformers/issues/8487
741,432,522
MDU6SXNzdWU3NDE0MzI1MjI=
8,487
`log_history` does not contain metrics anymore
{ "login": "PhilipMay", "id": 229382, "node_id": "MDQ6VXNlcjIyOTM4Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PhilipMay", "html_url": "https://github.com/PhilipMay", "followers_url": "https://api.github.com/users/PhilipMay/followers", "following_url": "https://api.github.com/users/PhilipMay/following{/other_user}", "gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}", "starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions", "organizations_url": "https://api.github.com/users/PhilipMay/orgs", "repos_url": "https://api.github.com/users/PhilipMay/repos", "events_url": "https://api.github.com/users/PhilipMay/events{/privacy}", "received_events_url": "https://api.github.com/users/PhilipMay/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "`evaluate` calls `log` which appends the results to `log_history`. So the code is there. Without a reproducer to investigate, there is nothing we can do to help.", "Here is the demo code that shows the bug: https://colab.research.google.com/drive/1dEzkDoMampL-VVrQeO924HQmHXffya0Z?usp=sharing\r\n\r\nThe last line should print all metrics and does that with version 3.4.0 but not with 3.5.0\r\n\r\nOutput of 3.4.0 (which is correct):\r\n```\r\n{'eval_loss': 0.5401068925857544, 'eval_f1_OTHER': 0.8642232403165347, 'eval_f1_OFFENSE': 0.6730190571715146, 'eval_recall_OTHER': 0.9230427046263345, 'eval_recall_OFFENSE': 0.5834782608695652, 'eval_acc': 0.8081224249558564, 'eval_bac': 0.7532604827479499, 'eval_mcc': 0.5547059570919702, 'eval_f1_macro': 0.7686211487440247, 'epoch': 2.0, 'total_flos': 668448673730400, 'step': 628}\r\n```\r\n\r\n\r\nBug in 3.5.0:\r\n```\r\n{'total_flos': 668448673730400, 'epoch': 2.0, 'step': 628}\r\n```", "Ah it's not a bug. In 3.5.0 there is one final log entry for the total_flos (instead of logging them during training as it's only useful at the end). So you can still access all your metrics but with the second-to-last entry (`trainer.state.log_history[-2]`).", "Hi @sgugger ok thanks for the info.\r\nIt might be no bug but honestly. This logging \"API\" is very fragile. What happens if I do a `[-2]` now and in the next release the final log entry for the total_flos is moved to an other list. Then I am getting the result of the 2nd last epoch instead of the last one.\r\n\r\nIMO this logging \"API\" needs a clean and better redesign. Or do I just use it in a wrong way?", "There are plenty of things that could log more info: a callback, some other tweak in training performed at the end. IMO you shouldn't rely on a hard-coded index but loop from the end of the `log_history` until you find a dict with the metric values.", "Ok. Closing this." ]
1,605
1,605
1,605
CONTRIBUTOR
null
Since version 3.5.0 the `log_history` of the trainer does not contain the metrics anymore. Version 3.4.0 works... My trainer uses a `compute_metrics` callback. It evaluates after each epoch. With version 3.4.0, after training I extract the last epoch's results via `trainer.state.log_history[-1]` to log the metrics. With version 3.5.0 that dict only contains the loss and epoch number, but not the computed metrics. I think something was changed that broke the metric logging. I cannot provide example code. Sorry...
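Following the resolution in the comments below (the final entry now holds `total_flos`, so a fixed index is brittle), a more robust pattern would be to scan `log_history` from the end; a sketch, assuming evaluation metric keys are prefixed with `eval_`:

```python
def last_eval_metrics(trainer):
    # Walk the log history backwards until an entry with evaluation
    # metrics shows up, instead of hard-coding [-1] or [-2].
    for entry in reversed(trainer.state.log_history):
        if any(key.startswith("eval_") for key in entry):
            return entry
    return None  # no evaluation was logged
```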
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8487/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8487/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8486
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8486/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8486/comments
https://api.github.com/repos/huggingface/transformers/issues/8486/events
https://github.com/huggingface/transformers/issues/8486
741,390,993
MDU6SXNzdWU3NDEzOTA5OTM=
8,486
Gradient accumulation averages over gradients
{ "login": "MarktHart", "id": 9414924, "node_id": "MDQ6VXNlcjk0MTQ5MjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/9414924?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MarktHart", "html_url": "https://github.com/MarktHart", "followers_url": "https://api.github.com/users/MarktHart/followers", "following_url": "https://api.github.com/users/MarktHart/following{/other_user}", "gists_url": "https://api.github.com/users/MarktHart/gists{/gist_id}", "starred_url": "https://api.github.com/users/MarktHart/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MarktHart/subscriptions", "organizations_url": "https://api.github.com/users/MarktHart/orgs", "repos_url": "https://api.github.com/users/MarktHart/repos", "events_url": "https://api.github.com/users/MarktHart/events{/privacy}", "received_events_url": "https://api.github.com/users/MarktHart/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hi @MarktHart do you mind posting this on the forum rather? It's here: https://discuss.huggingface.co\r\n\r\nWe are trying to focus the issues on bug reports and features/model requests.\r\n\r\nThanks a lot.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,605
1,611
1,611
NONE
null
https://github.com/huggingface/transformers/blob/121c24efa4453e4e726b5f0b2cf7095b14b7e74e/src/transformers/trainer.py#L1118 So I have been looking at this for the past day and a half. Please explain this to me: gradient accumulation should accumulate the gradient, not average it, right? Doesn't that make this scaling plain wrong? Am I missing something?
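For context, a sketch of why dividing by the accumulation steps is consistent when each per-batch loss is a mean (illustrative PyTorch-style pseudocode, not the Trainer's actual code; `model`, `batches`, and `optimizer` are assumed to exist):

```python
k = 4  # gradient_accumulation_steps (illustrative)
optimizer.zero_grad()
for batch in batches[:k]:
    # Each batch loss L_i is already a mean over its samples; dividing by k
    # makes the accumulated sum equal the mean loss over the large "virtual"
    # batch, i.e. grad((L_1 + ... + L_k) / k).
    loss = model(**batch).loss / k
    loss.backward()  # .grad fields accumulate by summation
optimizer.step()
```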
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8486/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8486/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8485
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8485/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8485/comments
https://api.github.com/repos/huggingface/transformers/issues/8485/events
https://github.com/huggingface/transformers/pull/8485
741,300,497
MDExOlB1bGxSZXF1ZXN0NTE5NjM3ODMz
8,485
Prediction loop: work with batches of variable length (fixed per batch)
{ "login": "JulesGM", "id": 3231217, "node_id": "MDQ6VXNlcjMyMzEyMTc=", "avatar_url": "https://avatars.githubusercontent.com/u/3231217?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JulesGM", "html_url": "https://github.com/JulesGM", "followers_url": "https://api.github.com/users/JulesGM/followers", "following_url": "https://api.github.com/users/JulesGM/following{/other_user}", "gists_url": "https://api.github.com/users/JulesGM/gists{/gist_id}", "starred_url": "https://api.github.com/users/JulesGM/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JulesGM/subscriptions", "organizations_url": "https://api.github.com/users/JulesGM/orgs", "repos_url": "https://api.github.com/users/JulesGM/repos", "events_url": "https://api.github.com/users/JulesGM/events{/privacy}", "received_events_url": "https://api.github.com/users/JulesGM/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Please feel free to modify this in any shape or form of course.", "This will cause a regression for code expecting a NumPy output, obviously :/", "I don't have much time, if any, to dedicate to this. If you're not interested by the idea, it's completely fine, I will close.", "I think @jplu is redesigning the TFTrainer. Maybe this should be reopened once that design has been merged in master?", "This won't be compliant anymore because the redisign doesn't use custom loops.", "Do you support different batch lengths in the new one? @jplu ", "It is not on top of the list but, yes for sure, we plan to support it, including for training." ]
1,605
1,605
1,605
NONE
null
In its current form, `prediction_loop` doesn't handle batches with samples of varying lengths (but a fixed length per batch). This patch adds that capability. This can save a lot of time during training and inference, where padding to the full length every time is a big sacrifice, given that self-attention scales as O(n^2) in sequence length. Disclaimer: This is a strictly personal contribution, not linked to my professional affiliation in any way. @sgugger https://github.com/huggingface/transformers/issues/8483
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8485/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8485/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8485", "html_url": "https://github.com/huggingface/transformers/pull/8485", "diff_url": "https://github.com/huggingface/transformers/pull/8485.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8485.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/8484
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8484/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8484/comments
https://api.github.com/repos/huggingface/transformers/issues/8484/events
https://github.com/huggingface/transformers/issues/8484
741,286,414
MDU6SXNzdWU3NDEyODY0MTQ=
8,484
automodel
{ "login": "RochelleChoenni", "id": 32510841, "node_id": "MDQ6VXNlcjMyNTEwODQx", "avatar_url": "https://avatars.githubusercontent.com/u/32510841?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RochelleChoenni", "html_url": "https://github.com/RochelleChoenni", "followers_url": "https://api.github.com/users/RochelleChoenni/followers", "following_url": "https://api.github.com/users/RochelleChoenni/following{/other_user}", "gists_url": "https://api.github.com/users/RochelleChoenni/gists{/gist_id}", "starred_url": "https://api.github.com/users/RochelleChoenni/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RochelleChoenni/subscriptions", "organizations_url": "https://api.github.com/users/RochelleChoenni/orgs", "repos_url": "https://api.github.com/users/RochelleChoenni/repos", "events_url": "https://api.github.com/users/RochelleChoenni/events{/privacy}", "received_events_url": "https://api.github.com/users/RochelleChoenni/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,605
1,605
1,605
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: - Python version: - PyTorch version (GPU?): - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?: ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten @TevenLeScao Blenderbot: @patrickvonplaten Bart: @patrickvonplaten Marian: @patrickvonplaten Pegasus: @patrickvonplaten mBART: @patrickvonplaten T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao RAG: @patrickvonplaten, @lhoestq FSMT: @stas00 examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [ ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. 2. 3. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8484/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8484/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8483
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8483/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8483/comments
https://api.github.com/repos/huggingface/transformers/issues/8483/events
https://github.com/huggingface/transformers/issues/8483
741,285,771
MDU6SXNzdWU3NDEyODU3NzE=
8,483
transformers.TFTrainer: Does not support batches with sequences of variable lengths?
{ "login": "JulesGM", "id": 3231217, "node_id": "MDQ6VXNlcjMyMzEyMTc=", "avatar_url": "https://avatars.githubusercontent.com/u/3231217?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JulesGM", "html_url": "https://github.com/JulesGM", "followers_url": "https://api.github.com/users/JulesGM/followers", "following_url": "https://api.github.com/users/JulesGM/following{/other_user}", "gists_url": "https://api.github.com/users/JulesGM/gists{/gist_id}", "starred_url": "https://api.github.com/users/JulesGM/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JulesGM/subscriptions", "organizations_url": "https://api.github.com/users/JulesGM/orgs", "repos_url": "https://api.github.com/users/JulesGM/repos", "events_url": "https://api.github.com/users/JulesGM/events{/privacy}", "received_events_url": "https://api.github.com/users/JulesGM/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Created pull request https://github.com/huggingface/transformers/pull/8485", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Hello @JulesGM \r\nRegarding [#8483](https://github.com/huggingface/transformers/pull/8485)\r\nI went through the code and I found this helpful. But I'm facing issues to convert my training tf.data.Dataset to tf.RaggedTensor format. if possible can you share resources regarding this?" ]
1,605
1,620
1,611
NONE
null
Hello, It seems like `np.append` in `TFTrainer.prediction_loop` is the only thing that prevents TFTrainer from being able to deal with batches of variable sequence length (between batches, not inside a batch). Indeed, `np.append` requires the batches to have the same sequence length. Alternatives: since this is TensorFlow, an easy option would be to convert the batches to `tf.RaggedTensor` with `tf.ragged.constant`, and to concatenate them (the usual way) with `tf.concat`. You could also, of course, just make `preds` and `label_ids` into lists. There doesn't seem to be any heavy computation happening on these objects.
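A minimal sketch of the ragged-accumulation idea (illustrative only, not the actual `prediction_loop` code; `batch_logits_list` stands in for the per-step predictions):

```python
import tensorflow as tf

preds = None
for batch_logits in batch_logits_list:  # each dense: (batch, seq_len_i, ...)
    # Keep per-row lengths instead of forcing a common sequence length.
    ragged = tf.RaggedTensor.from_tensor(batch_logits)
    preds = ragged if preds is None else tf.concat([preds, ragged], axis=0)
```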
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8483/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8483/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8482
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8482/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8482/comments
https://api.github.com/repos/huggingface/transformers/issues/8482/events
https://github.com/huggingface/transformers/pull/8482
741,261,812
MDExOlB1bGxSZXF1ZXN0NTE5NjA2MDAz
8,482
TAPAS tokenizer & tokenizer tests
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thank you! \r\n\r\n❗ This is a preliminary review, I'm not finished with it. 2 important things for now:\r\n\r\n1) I am also testing the Colab demo's with this branch. Currently I'm getting an error when providing `answer_coordinates` and `answer_texts` to the tokenizer: \r\n\r\nSQA: https://colab.research.google.com/drive/1BNxrKkrwpWuE2TthZL5qQlERtcK4ZbIt?usp=sharing\r\nWTQ: https://colab.research.google.com/drive/1K8ZeNQyBqo-A03D8RL8_j34n-Ubggb9U?usp=sharing\r\n\r\nNormally, the `label_ids`, `numeric_values` and `numeric_values_scale` should also be padded when I set padding='max_length'. \r\n\r\n2) I've got an updated version of the creation of the numeric values (they are currently not performed correctly) in a branch named `tapas_v3_up_to_date_with_master`. Either you could incorporate these changes in your branch before making a PR, or I make them after the PR is merged (what you like best - the latter is probably easier). ", "Great, thanks for your great preliminary review. I've fixed a few of the issues, just pushed a commit. There's a few things you mention that definitely need a deeper look. I can do so in the coming days, but I'll let you finish your review first so that I may batch everything. Thank you!", "@LysandreJik I have finished reviewing, I've added more (mostly documentation-related) comments.\r\n\r\nThe most important thing is that when `label_ids`, `answer_coordinates` and `answer_text` are provided to the tokenizer, an error is currently thrown due to the fact that padding is not working. \r\n\r\nBesides this, the other important things are:\r\n* a correct implementation of the creation of the `prev_label_ids` when a batch of table-question pairs is provided\r\n* a correct implementation of `drop_rows_to_fit` and `cell_trim_length`" ]
1,605
1,651
1,605
MEMBER
null
This PR aims to implement the tokenizer API for the TAPAS model, as well as the tests. It is based on `tapas-style` which contains all the changes done by black & isort on top of the `nielsrogge/tapas_v3` branch in https://github.com/huggingface/transformers/pull/8113. The API is akin to our other tokenizers': it is based on the `__call__` method which dispatches to `encode_plus` or `batch_encode_plus` according to the inputs. These two methods then dispatch to `_encode_plus` and `_batch_encode_plus`, which themselves dispatch to `prepare_for_model` and `_batch_prepare_for_model`. Here are the remaining tasks for the tokenizers, from what I could observe: - Two tokenizer tests are failing. This is only because no checkpoint is currently available. - The truncation is *not* the same as it was before these changes. Before these changes, if a row of the dataframe was to be truncated, the whole row was removed. Right now only the overflowing tokens will be removed. This is probably an important change that will need to be reverted (re-implemented in the new API). - The tokenizer is based on `pd.DataFrame`s. It should be very simple to switch from these to `datasets.Dataset`, which serves the same purpose. Once this PR is merged, I'll open a PR from `tapas-style` to `nielsrogge/tapas_v3` as explained in https://github.com/huggingface/transformers/pull/8113#issuecomment-725818087
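A hypothetical usage sketch of the `__call__`-based API described above (the `table`/`queries` argument names and the pre-instantiated `tokenizer` are assumptions here, since no checkpoint was available at the time):

```python
import pandas as pd

# A toy table; TAPAS tokenization flattens the table together with the query.
table = pd.DataFrame({"Actor": ["Brad Pitt", "Leonardo DiCaprio"],
                      "Age": ["56", "45"]})

# __call__ dispatches to encode_plus for a single query, or to
# batch_encode_plus when a list of queries is passed.
inputs = tokenizer(table=table,
                   queries=["How old is Brad Pitt?"],
                   padding="max_length",
                   truncation=True)
```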
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8482/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8482/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8482", "html_url": "https://github.com/huggingface/transformers/pull/8482", "diff_url": "https://github.com/huggingface/transformers/pull/8482.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8482.patch", "merged_at": 1605544240000 }
https://api.github.com/repos/huggingface/transformers/issues/8481
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8481/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8481/comments
https://api.github.com/repos/huggingface/transformers/issues/8481/events
https://github.com/huggingface/transformers/pull/8481
741,259,270
MDExOlB1bGxSZXF1ZXN0NTE5NjA0MDAw
8,481
TAPAS Tokenizer & tokenizer tests
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,605
1,605
1,605
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8481/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8481/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8481", "html_url": "https://github.com/huggingface/transformers/pull/8481", "diff_url": "https://github.com/huggingface/transformers/pull/8481.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8481.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/8480
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8480/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8480/comments
https://api.github.com/repos/huggingface/transformers/issues/8480/events
https://github.com/huggingface/transformers/issues/8480
741,208,845
MDU6SXNzdWU3NDEyMDg4NDU=
8,480
Error when upload models: "LFS: Client error"
{ "login": "danyaljj", "id": 2441454, "node_id": "MDQ6VXNlcjI0NDE0NTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2441454?v=4", "gravatar_id": "", "url": "https://api.github.com/users/danyaljj", "html_url": "https://github.com/danyaljj", "followers_url": "https://api.github.com/users/danyaljj/followers", "following_url": "https://api.github.com/users/danyaljj/following{/other_user}", "gists_url": "https://api.github.com/users/danyaljj/gists{/gist_id}", "starred_url": "https://api.github.com/users/danyaljj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/danyaljj/subscriptions", "organizations_url": "https://api.github.com/users/danyaljj/orgs", "repos_url": "https://api.github.com/users/danyaljj/repos", "events_url": "https://api.github.com/users/danyaljj/events{/privacy}", "received_events_url": "https://api.github.com/users/danyaljj/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Since `git push` by itself is not so informative, I retried it with more verbose output (sorry for the long output): \r\n```\r\n$ GIT_TRACE=1 GIT_CURL_VERBOSE=1 git push\r\n18:07:20.246678 git.c:440 trace: built-in: git push\r\n18:07:20.247908 run-command.c:663 trace: run_command: GIT_DIR=.git git-remote-https origin https://huggingface.co/allenai/unifiedqa-t5-3b\r\n* Couldn't find host huggingface.co in the .netrc file; using defaults\r\n* Trying 192.99.39.165...\r\n* TCP_NODELAY set\r\n* Connected to huggingface.co (192.99.39.165) port 443 (#0)\r\n* ALPN, offering h2\r\n* ALPN, offering http/1.1\r\n* successfully set certificate verify locations:\r\n* CAfile: /etc/ssl/cert.pem\r\n CApath: none\r\n* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384\r\n* ALPN, server accepted to use http/1.1\r\n* Server certificate:\r\n* subject: CN=huggingface.co\r\n* start date: Nov 10 08:05:46 2020 GMT\r\n* expire date: Feb 8 08:05:46 2021 GMT\r\n* subjectAltName: host \"huggingface.co\" matched cert's \"huggingface.co\"\r\n* issuer: C=US; O=Let's Encrypt; CN=Let's Encrypt Authority X3\r\n* SSL certificate verify ok.\r\n> GET /allenai/unifiedqa-t5-3b/info/refs?service=git-receive-pack HTTP/1.1\r\nHost: huggingface.co\r\nUser-Agent: git/2.23.0\r\nAccept: */*\r\nAccept-Encoding: deflate, gzip\r\nAccept-Language: en-US, *;q=0.9\r\nPragma: no-cache\r\n\r\n< HTTP/1.1 401 Unauthorized\r\n< Server: nginx/1.14.2\r\n< Date: Thu, 12 Nov 2020 02:07:21 GMT\r\n< Content-Type: text/plain; charset=utf-8\r\n< Content-Length: 12\r\n< Connection: keep-alive\r\n< X-Powered-By: huggingface-moon\r\n< WWW-Authenticate: Basic realm=\"Authentication required\", charset=\"UTF-8\"\r\n< ETag: W/\"c-dAuDFQrdjS3hezqxDTNgW7AOlYk\"\r\n< \r\n* Connection #0 to host huggingface.co left intact\r\n18:07:20.859025 run-command.c:663 trace: run_command: 'git credential-osxkeychain get'\r\n18:07:20.876285 git.c:703 trace: exec: git-credential-osxkeychain get\r\n18:07:20.877309 run-command.c:663 trace: run_command: git-credential-osxkeychain get\r\n* Found bundle for host huggingface.co: 0x7f89a65048d0 [can pipeline]\r\n* Could pipeline, but not asked to!\r\n* Re-using existing connection! 
(#0) with host huggingface.co\r\n* Connected to huggingface.co (192.99.39.165) port 443 (#0)\r\n* Server auth using Basic with user 'danyaljj'\r\n> GET /allenai/unifiedqa-t5-3b/info/refs?service=git-receive-pack HTTP/1.1\r\nHost: huggingface.co\r\nAuthorization: Basic ZGFueWFsamo6UmVuZGNyYXp5MQ==\r\nUser-Agent: git/2.23.0\r\nAccept: */*\r\nAccept-Encoding: deflate, gzip\r\nAccept-Language: en-US, *;q=0.9\r\nPragma: no-cache\r\n\r\n< HTTP/1.1 200 OK\r\n< Server: nginx/1.14.2\r\n< Date: Thu, 12 Nov 2020 02:07:21 GMT\r\n< Content-Type: application/x-git-receive-pack-advertisement\r\n< Transfer-Encoding: chunked\r\n< Connection: keep-alive\r\n< X-Powered-By: huggingface-moon\r\n< \r\n* Connection #0 to host huggingface.co left intact\r\n18:07:21.159144 run-command.c:663 trace: run_command: 'git credential-osxkeychain store'\r\n18:07:21.175886 git.c:703 trace: exec: git-credential-osxkeychain store\r\n18:07:21.176867 run-command.c:663 trace: run_command: git-credential-osxkeychain store\r\n18:07:21.240597 run-command.c:663 trace: run_command: .git/hooks/pre-push origin https://huggingface.co/allenai/unifiedqa-t5-3b\r\n18:07:21.258423 git.c:703 trace: exec: git-lfs pre-push origin https://huggingface.co/allenai/unifiedqa-t5-3b\r\n18:07:21.259618 run-command.c:663 trace: run_command: git-lfs pre-push origin https://huggingface.co/allenai/unifiedqa-t5-3b\r\n18:07:21.280328 trace git-lfs: exec: git 'version'\r\n18:07:21.305769 trace git-lfs: exec: git '-c' 'filter.lfs.smudge=' '-c' 'filter.lfs.clean=' '-c' 'filter.lfs.process=' '-c' 'filter.lfs.required=false' 'rev-parse' 'HEAD' '--symbolic-full-name' 'HEAD'\r\n18:07:21.330148 trace git-lfs: exec: git 'config' '-l'\r\n18:07:21.341551 trace git-lfs: pre-push: refs/heads/main 820bb7e936e2e5665ea9c4ac3016456b3ce55bc7 refs/heads/main 4d2dae1e804fc041975dc40c06e3ab902b6c3f38\r\n18:07:21.829857 trace git-lfs: tq: running as batched queue, batch size of 100\r\n18:07:21.830328 trace git-lfs: run_command: git rev-list --stdin --objects --not --remotes=origin --\r\n18:07:21.848139 trace git-lfs: tq: sending batch of size 1 \r\n18:07:21.848726 trace git-lfs: api: batch 1 files\r\n18:07:21.848996 trace git-lfs: creds: git credential fill (\"https\", \"huggingface.co\", \"\")\r\n18:07:21.859568 git.c:440 trace: built-in: git credential fill\r\n18:07:21.861149 run-command.c:663 trace: run_command: 'git credential-osxkeychain get'\r\n18:07:21.877936 git.c:703 trace: exec: git-credential-osxkeychain get\r\n18:07:21.879004 run-command.c:663 trace: run_command: git-credential-osxkeychain get\r\n18:07:21.920056 trace git-lfs: Filled credentials for https://huggingface.co/allenai/unifiedqa-t5-3b\r\n18:07:21.989068 trace git-lfs: HTTP: POST https://huggingface.co/allenai/unifiedqa-t5-3b.git/info/lfs/objects/batch\r\n> POST /allenai/unifiedqa-t5-3b.git/info/lfs/objects/batch HTTP/1.1\r\n> Host: huggingface.co\r\n> Accept: application/vnd.git-lfs+json; charset=utf-8\r\n> Authorization: Basic * * * * *\r\n> Content-Length: 205\r\n> Content-Type: application/vnd.git-lfs+json; charset=utf-8\r\n> User-Agent: git-lfs/2.10.0 (GitHub; darwin amd64; go 1.13.6)\r\n> \r\n{\"operation\":\"upload\",\"objects\":[{\"oid\":\"7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\",\"size\":11406640119}],\"transfers\":[\"lfs-standalone-file\",\"basic\"],\"ref\":{\"name\":\"refs/heads/main\"}}18:07:23.102074 trace git-lfs: HTTP: 200\r\n\r\n\r\n< HTTP/1.1 200 OK\r\n< Content-Length: 578\r\n< Connection: keep-alive\r\n< Content-Type: application/vnd.git-lfs+json; 
charset=utf-8\r\n< Date: Thu, 12 Nov 2020 02:07:23 GMT\r\n< Etag: W/\"242-LFg/omWZFm9SxeMWd5EiIfG1JTM\"\r\n< Server: nginx/1.14.2\r\n< X-Powered-By: huggingface-moon\r\n< \r\n18:07:23.102239 trace git-lfs: creds: git credential approve (\"https\", \"huggingface.co\", \"\")\r\n18:07:23.112995 git.c:440 trace: built-in: git credential approve\r\n18:07:23.114213 run-command.c:663 trace: run_command: 'git credential-osxkeychain store'\r\n18:07:23.129607 git.c:703 trace: exec: git-credential-osxkeychain store\r\n18:07:23.130582 run-command.c:663 trace: run_command: git-credential-osxkeychain store\r\n18:07:23.195094 trace git-lfs: HTTP: {\"transfer\":\"basic\",\"objects\":[{\"oid\":\"7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\",\"size\":11406640119,\"authenticated\":true,\"actions\":{\"upload\":{\"href\":\"https://s3.amazonaws.com/lfs.huggingface.co/allenai/unifiedqa-t5-3b/7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIA4N7VTDGOZQA2IKWK%2F20201112%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20201112T020723Z&X-Amz-Expires=900&X-Amz-Signature=2d3c1762e44b21f78c89a7c5a5f41\r\n{\"transfer\":\"basic\",\"objects\":[{\"oid\":\"7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\",\"size\":11406640119,\"authenticated\":true,\"actions\":{\"upload\":{\"href\":\"https://s3.amazonaws.com/lfs.huggingface.co/allenai/unifiedqa-t5-3b/7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIA4N7VTDGOZQA2IKWK%2F20201112%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20201112T020723Z&X-Amz-Expires=900&X-Amz-Signature=2d3c1762e44b21f78c89a7c5a5f4118:07:23.195387 trace git-lfs: HTTP: 655333c901abeb466effd6f1d61bd110f6a&X-Amz-SignedHeaders=host\"}}}]}\r\n655333c901abeb466effd6f1d61bd110f6a&X-Amz-SignedHeaders=host\"}}}]}Uploading LFS objects: 0% (0/1), 0 B | 0 B/s 18:07:23.195588 trace git-lfs: tq: starting transfer adapter \"basic\"\r\n18:07:23.195998 trace git-lfs: xfer: adapter \"basic\" Begin() with 8 workers\r\n18:07:23.196062 trace git-lfs: xfer: adapter \"basic\" started\r\n18:07:23.196099 trace git-lfs: xfer: adapter \"basic\" worker 2 starting\r\n18:07:23.196118 trace git-lfs: xfer: adapter \"basic\" worker 0 starting\r\n18:07:23.196169 trace git-lfs: xfer: adapter \"basic\" worker 2 waiting for Auth\r\n18:07:23.196185 trace git-lfs: xfer: adapter \"basic\" worker 1 starting\r\n18:07:23.196151 trace git-lfs: xfer: adapter \"basic\" worker 4 starting\r\n18:07:23.196216 trace git-lfs: xfer: adapter \"basic\" worker 5 starting\r\n18:07:23.196257 trace git-lfs: xfer: adapter \"basic\" worker 5 waiting for Auth\r\n18:07:23.196248 trace git-lfs: xfer: adapter \"basic\" worker 4 waiting for Auth\r\n18:07:23.196288 trace git-lfs: xfer: adapter \"basic\" worker 0 processing job for \"7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\"\r\n18:07:23.196293 trace git-lfs: xfer: adapter \"basic\" worker 1 waiting for Auth\r\n18:07:23.196255 trace git-lfs: xfer: adapter \"basic\" worker 3 starting\r\n18:07:23.196290 trace git-lfs: xfer: adapter \"basic\" worker 6 starting\r\n18:07:23.196423 trace git-lfs: xfer: adapter \"basic\" worker 6 waiting for Auth\r\n18:07:23.196380 trace git-lfs: xfer: adapter \"basic\" worker 7 starting\r\n18:07:23.196458 trace git-lfs: xfer: adapter \"basic\" worker 7 waiting for Auth\r\n18:07:23.196420 trace git-lfs: xfer: adapter \"basic\" worker 3 waiting for Auth\r\n18:07:23.261193 trace git-lfs: HTTP: PUT 
https://s3.amazonaws.com/lfs.huggingface.co/allenai/unifiedqa-t5-3b/7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\r\n> PUT /lfs.huggingface.co/allenai/unifiedqa-t5-3b/7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIA4N7VTDGOZQA2IKWK%2F20201112%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20201112T020723Z&X-Amz-Expires=900&X-Amz-Signature=2d3c1762e44b21f78c89a7c5a5f41655333c901abeb466effd6f1d61bd110f6a&X-Amz-SignedHeaders=host HTTP/1.1\r\n> Host: s3.amazonaws.com\r\n> Content-Length: 11406640119\r\n> Content-Type: application/zip\r\n> User-Agent: git-lfs/2.10.0 (GitHub; darwin amd64; go 1.13.6)\r\n> \r\n18:07:23.499247 trace git-lfs: xfer: adapter \"basic\" worker 4 auth signal received\r\n18:07:23.499298 trace git-lfs: xfer: adapter \"basic\" worker 5 auth signal received\r\n18:07:23.499281 trace git-lfs: xfer: adapter \"basic\" worker 2 auth signal received\r\n18:07:23.499315 trace git-lfs: xfer: adapter \"basic\" worker 6 auth signal received\r\n18:07:23.499324 trace git-lfs: xfer: adapter \"basic\" worker 7 auth signal received\r\n18:07:23.499353 trace git-lfs: xfer: adapter \"basic\" worker 1 auth signal received\r\n18:07:23.499412 trace git-lfs: xfer: adapter \"basic\" worker 3 auth signal received\r\n18:07:34.596626 trace git-lfs: xfer: adapter \"basic\" worker 0 finished job for \"7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\" \r\n18:07:34.596706 trace git-lfs: tq: retrying object 7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386: LFS: Put https://s3.amazonaws.com/lfs.huggingface.co/allenai/unifiedqa-t5-3b/7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIA4N7VTDGOZQA2IKWK%2F20201112%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20201112T020723Z&X-Amz-Expires=900&X-Amz-Signature=2d3c1762e44b21f78c89a7c5a5f41655333c901abeb466effd6f1d61bd110f6a&X-Amz-SignedHeaders=host: write tcp 192.168.0.6:57346->52.216.242.102:443: write: broken pipe\r\n18:07:34.596761 trace git-lfs: tq: enqueue retry #1 for \"7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\" (size: 11406640119)\r\n18:07:34.596823 trace git-lfs: tq: sending batch of size 1\r\n18:07:34.596995 trace git-lfs: api: batch 1 files\r\n18:07:34.597180 trace git-lfs: creds: git credential cache (\"https\", \"huggingface.co\", \"\")\r\n18:07:34.597193 trace git-lfs: Filled credentials for https://huggingface.co/allenai/unifiedqa-t5-3b\r\n18:07:34.597208 trace git-lfs: HTTP: POST https://huggingface.co/allenai/unifiedqa-t5-3b.git/info/lfs/objects/batch\r\n> POST /allenai/unifiedqa-t5-3b.git/info/lfs/objects/batch HTTP/1.1\r\n> Host: huggingface.co\r\n> Accept: application/vnd.git-lfs+json; charset=utf-8\r\n> Authorization: Basic * * * * *\r\n> Content-Length: 205\r\n> Content-Type: application/vnd.git-lfs+json; charset=utf-8\r\n> User-Agent: git-lfs/2.10.0 (GitHub; darwin amd64; go 1.13.6)\r\n> \r\n{\"operation\":\"upload\",\"objects\":[{\"oid\":\"7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\",\"size\":11406640119}],\"transfers\":[\"lfs-standalone-file\",\"basic\"],\"ref\":{\"name\":\"refs/heads/main\"}}18:07:34.925848 trace git-lfs: HTTP: 200\r\n\r\n\r\n< HTTP/1.1 200 OK\r\n< Content-Length: 578\r\n< Connection: keep-alive\r\n< Content-Type: application/vnd.git-lfs+json; charset=utf-8\r\n< Date: Thu, 12 Nov 2020 02:07:35 GMT\r\n< Etag: W/\"242-5zNHypYie/0vI3rttL7+btltlmQ\"\r\n< Server: nginx/1.14.2\r\n< X-Powered-By: 
huggingface-moon\r\n< \r\n18:07:34.926039 trace git-lfs: HTTP: {\"transfer\":\"basic\",\"objects\":[{\"oid\":\"7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\",\"size\":11406640119,\"authenticated\":true,\"actions\":{\"upload\":{\"href\":\"https://s3.amazonaws.com/lfs.huggingface.co/allenai/unifiedqa-t5-3b/7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIA4N7VTDGOZQA2IKWK%2F20201112%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20201112T020735Z&X-Amz-Expires=900&X-Amz-Signature=602be765be6206f6363a93f156b88\r\n{\"transfer\":\"basic\",\"objects\":[{\"oid\":\"7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\",\"size\":11406640119,\"authenticated\":true,\"actions\":{\"upload\":{\"href\":\"https://s3.amazonaws.com/lfs.huggingface.co/allenai/unifiedqa-t5-3b/7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIA4N7VTDGOZQA2IKWK%2F20201112%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20201112T020735Z&X-Amz-Expires=900&X-Amz-Signature=602be765be6206f6363a93f156b8818:07:34.926220 trace git-lfs: HTTP: 37884d95b5a8f27d14e254dd7108b845cb7&X-Amz-SignedHeaders=host\"}}}]}\r\n37884d95b5a8f27d14e254dd7108b845cb7&X-Amz-SignedHeaders=host\"}}}]}Uploading LFS objects: 0% (0/1), 9.7 MB | 742 KB/s 18:07:34.926411 trace git-lfs: xfer: adapter \"basic\" worker 4 processing job for \"7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\"\r\n18:07:34.926793 trace git-lfs: HTTP: PUT https://s3.amazonaws.com/lfs.huggingface.co/allenai/unifiedqa-t5-3b/7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\r\n> PUT /lfs.huggingface.co/allenai/unifiedqa-t5-3b/7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIA4N7VTDGOZQA2IKWK%2F20201112%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20201112T020735Z&X-Amz-Expires=900&X-Amz-Signature=602be765be6206f6363a93f156b8837884d95b5a8f27d14e254dd7108b845cb7&X-Amz-SignedHeaders=host HTTP/1.1\r\n> Host: s3.amazonaws.com\r\n> Content-Length: 11406640119\r\n> Content-Type: application/zip\r\n> User-Agent: git-lfs/2.10.0 (GitHub; darwin amd64; go 1.13.6)\r\n> \r\n18:07:44.835908 trace git-lfs: HTTP: 400 | 752 KB/s \r\n\r\n\r\n< HTTP/1.1 400 Bad Request\r\n< Connection: close\r\n< Transfer-Encoding: chunked\r\n< Content-Type: application/xml\r\n< Date: Thu, 12 Nov 2020 02:07:44 GMT\r\n< Server: AmazonS3\r\n< X-Amz-Id-2: mDWWLDn2SM2srJXwqsVkEIAue+9F8wnupyuGkTAD4lcLKmDSSBa75zgKY7NXUC0X7QEMVwmPSVk=\r\n< X-Amz-Request-Id: B971E21D6254F404\r\n< \r\n18:07:44.836114 trace git-lfs: xfer: adapter \"basic\" worker 4 finished job for \"7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\" \r\n18:07:44.836157 trace git-lfs: tq: retrying object 7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386: LFS: Client error: https://s3.amazonaws.com/lfs.huggingface.co/allenai/unifiedqa-t5-3b/7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIA4N7VTDGOZQA2IKWK%2F20201112%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20201112T020735Z&X-Amz-Expires=900&X-Amz-Signature=602be765be6206f6363a93f156b8837884d95b5a8f27d14e254dd7108b845cb7&X-Amz-SignedHeaders=host\r\n18:07:44.836199 trace git-lfs: tq: enqueue retry #2 for \"7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\" (size: 11406640119)\r\n18:07:44.836238 trace git-lfs: tq: sending batch of size 
1\r\n18:07:44.836355 trace git-lfs: api: batch 1 files\r\n18:07:44.836546 trace git-lfs: creds: git credential cache (\"https\", \"huggingface.co\", \"\")\r\n18:07:44.836556 trace git-lfs: Filled credentials for https://huggingface.co/allenai/unifiedqa-t5-3b\r\n18:07:44.836585 trace git-lfs: HTTP: POST https://huggingface.co/allenai/unifiedqa-t5-3b.git/info/lfs/objects/batch\r\n> POST /allenai/unifiedqa-t5-3b.git/info/lfs/objects/batch HTTP/1.1\r\n> Host: huggingface.co\r\n> Accept: application/vnd.git-lfs+json; charset=utf-8\r\n> Authorization: Basic * * * * *\r\n> Content-Length: 205\r\n> Content-Type: application/vnd.git-lfs+json; charset=utf-8\r\n> User-Agent: git-lfs/2.10.0 (GitHub; darwin amd64; go 1.13.6)\r\n> \r\n{\"operation\":\"upload\",\"objects\":[{\"oid\":\"7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\",\"size\":11406640119}],\"transfers\":[\"lfs-standalone-file\",\"basic\"],\"ref\":{\"name\":\"refs/heads/main\"}}18:07:45.158001 trace git-lfs: HTTP: 200\r\n\r\n\r\n< HTTP/1.1 200 OK\r\n< Content-Length: 578\r\n< Connection: keep-alive\r\n< Content-Type: application/vnd.git-lfs+json; charset=utf-8\r\n< Date: Thu, 12 Nov 2020 02:07:45 GMT\r\n< Etag: W/\"242-m4CvhzTDqQPlc75+BedrFERvkE0\"\r\n< Server: nginx/1.14.2\r\n< X-Powered-By: huggingface-moon\r\n< \r\n18:07:45.158145 trace git-lfs: HTTP: {\"transfer\":\"basic\",\"objects\":[{\"oid\":\"7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\",\"size\":11406640119,\"authenticated\":true,\"actions\":{\"upload\":{\"href\":\"https://s3.amazonaws.com/lfs.huggingface.co/allenai/unifiedqa-t5-3b/7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIA4N7VTDGOZQA2IKWK%2F20201112%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20201112T020745Z&X-Amz-Expires=900&X-Amz-Signature=82592319178cff3f2a02f404e3540\r\n{\"transfer\":\"basic\",\"objects\":[{\"oid\":\"7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\",\"size\":11406640119,\"authenticated\":true,\"actions\":{\"upload\":{\"href\":\"https://s3.amazonaws.com/lfs.huggingface.co/allenai/unifiedqa-t5-3b/7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIA4N7VTDGOZQA2IKWK%2F20201112%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20201112T020745Z&X-Amz-Expires=900&X-Amz-Signature=82592319178cff3f2a02f404e354018:07:45.158254 trace git-lfs: HTTP: 89a6ef22b79fe75e6f544e1786faf3e8f5e&X-Amz-SignedHeaders=host\"}}}]}\r\n89a6ef22b79fe75e6f544e1786faf3e8f5e&X-Amz-SignedHeaders=host\"}}}]}Uploading LFS objects: 0% (0/1), 9.7 MB | 752 KB/s 18:07:45.158419 trace git-lfs: xfer: adapter \"basic\" worker 5 processing job for \"7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\"\r\n18:07:45.158794 trace git-lfs: HTTP: PUT https://s3.amazonaws.com/lfs.huggingface.co/allenai/unifiedqa-t5-3b/7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\r\n> PUT /lfs.huggingface.co/allenai/unifiedqa-t5-3b/7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIA4N7VTDGOZQA2IKWK%2F20201112%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20201112T020745Z&X-Amz-Expires=900&X-Amz-Signature=82592319178cff3f2a02f404e354089a6ef22b79fe75e6f544e1786faf3e8f5e&X-Amz-SignedHeaders=host HTTP/1.1\r\n> Host: s3.amazonaws.com\r\n> Content-Length: 11406640119\r\n> Content-Type: application/zip\r\n> User-Agent: git-lfs/2.10.0 (GitHub; darwin amd64; go 1.13.6)\r\n> \r\n18:07:55.959066 trace 
git-lfs: HTTP: 400 | 665 KB/s \r\n\r\n\r\n< HTTP/1.1 400 Bad Request\r\n< Connection: close\r\n< Transfer-Encoding: chunked\r\n< Content-Type: application/xml\r\n< Date: Thu, 12 Nov 2020 02:07:54 GMT\r\n< Server: AmazonS3\r\n< X-Amz-Id-2: YPk1ZSL19/lW1Z7WxE/pTAyDK0Ny2ryDVCi1TZXtuT8Bh6itRmL4qO163dKG+s9yBSl8jyKRD7Y=\r\n< X-Amz-Request-Id: D0E50DEEB73DFA43\r\n< \r\n18:07:55.959368 trace git-lfs: xfer: adapter \"basic\" worker 5 finished job for \"7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\"\r\n18:07:55.959409 trace git-lfs: tq: retrying object 7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386: LFS: Client error: https://s3.amazonaws.com/lfs.huggingface.co/allenai/unifiedqa-t5-3b/7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIA4N7VTDGOZQA2IKWK%2F20201112%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20201112T020745Z&X-Amz-Expires=900&X-Amz-Signature=82592319178cff3f2a02f404e354089a6ef22b79fe75e6f544e1786faf3e8f5e&X-Amz-SignedHeaders=host\r\n18:07:55.959458 trace git-lfs: tq: enqueue retry #3 for \"7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\" (size: 11406640119)\r\n18:07:55.959490 trace git-lfs: tq: sending batch of size 1\r\n18:07:55.959582 trace git-lfs: api: batch 1 files\r\n18:07:55.959750 trace git-lfs: creds: git credential cache (\"https\", \"huggingface.co\", \"\")\r\n18:07:55.959768 trace git-lfs: Filled credentials for https://huggingface.co/allenai/unifiedqa-t5-3b\r\n18:07:55.959786 trace git-lfs: HTTP: POST https://huggingface.co/allenai/unifiedqa-t5-3b.git/info/lfs/objects/batch\r\n> POST /allenai/unifiedqa-t5-3b.git/info/lfs/objects/batch HTTP/1.1\r\n> Host: huggingface.co\r\n> Accept: application/vnd.git-lfs+json; charset=utf-8\r\n> Authorization: Basic * * * * *\r\n> Content-Length: 205\r\n> Content-Type: application/vnd.git-lfs+json; charset=utf-8\r\n> User-Agent: git-lfs/2.10.0 (GitHub; darwin amd64; go 1.13.6)\r\n> \r\n{\"operation\":\"upload\",\"objects\":[{\"oid\":\"7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\",\"size\":11406640119}],\"transfers\":[\"basic\",\"lfs-standalone-file\"],\"ref\":{\"name\":\"refs/heads/main\"}}18:07:56.260024 trace git-lfs: HTTP: 200\r\n\r\n\r\n< HTTP/1.1 200 OK\r\n< Content-Length: 578\r\n< Connection: keep-alive\r\n< Content-Type: application/vnd.git-lfs+json; charset=utf-8\r\n< Date: Thu, 12 Nov 2020 02:07:56 GMT\r\n< Etag: W/\"242-31cowPk91NvaIaX84tjI/gLbdvo\"\r\n< Server: nginx/1.14.2\r\n< X-Powered-By: huggingface-moon\r\n< \r\n18:07:56.260224 trace git-lfs: HTTP: 
{\"transfer\":\"basic\",\"objects\":[{\"oid\":\"7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\",\"size\":11406640119,\"authenticated\":true,\"actions\":{\"upload\":{\"href\":\"https://s3.amazonaws.com/lfs.huggingface.co/allenai/unifiedqa-t5-3b/7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIA4N7VTDGOZQA2IKWK%2F20201112%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20201112T020756Z&X-Amz-Expires=900&X-Amz-Signature=1c194a7990031e65288f0e7c59507\r\n{\"transfer\":\"basic\",\"objects\":[{\"oid\":\"7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\",\"size\":11406640119,\"authenticated\":true,\"actions\":{\"upload\":{\"href\":\"https://s3.amazonaws.com/lfs.huggingface.co/allenai/unifiedqa-t5-3b/7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIA4N7VTDGOZQA2IKWK%2F20201112%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20201112T020756Z&X-Amz-Expires=900&X-Amz-Signature=1c194a7990031e65288f0e7c5950718:07:56.260428 trace git-lfs: HTTP: 8387ee098bd5b3f3708e02085e3e9f6601a&X-Amz-SignedHeaders=host\"}}}]}\r\n8387ee098bd5b3f3708e02085e3e9f6601a&X-Amz-SignedHeaders=host\"}}}]}Uploading LFS objects: 0% (0/1), 9.7 MB | 665 KB/s 18:07:56.260674 trace git-lfs: xfer: adapter \"basic\" worker 2 processing job for \"7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\"\r\n18:07:56.261037 trace git-lfs: HTTP: PUT https://s3.amazonaws.com/lfs.huggingface.co/allenai/unifiedqa-t5-3b/7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\r\n> PUT /lfs.huggingface.co/allenai/unifiedqa-t5-3b/7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIA4N7VTDGOZQA2IKWK%2F20201112%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20201112T020756Z&X-Amz-Expires=900&X-Amz-Signature=1c194a7990031e65288f0e7c595078387ee098bd5b3f3708e02085e3e9f6601a&X-Amz-SignedHeaders=host HTTP/1.1\r\n> Host: s3.amazonaws.com\r\n> Content-Length: 11406640119\r\n> Content-Type: application/zip\r\n> User-Agent: git-lfs/2.10.0 (GitHub; darwin amd64; go 1.13.6)\r\n> \r\n18:08:07.099441 trace git-lfs: HTTP: 400 | 567 KB/s \r\n\r\n\r\n< HTTP/1.1 400 Bad Request\r\n< Connection: close\r\n< Transfer-Encoding: chunked\r\n< Content-Type: application/xml\r\n< Date: Thu, 12 Nov 2020 02:08:06 GMT\r\n< Server: AmazonS3\r\n< X-Amz-Id-2: e9blPqVAV5CVfFOylV29AzDODso+WNBEVIhJKKQc6NbEAMDeUCyJ5NKumhuM5P3i67O58fmm31g=\r\n< X-Amz-Request-Id: DFED315EE7523BFE\r\n< \r\n18:08:07.099632 trace git-lfs: xfer: adapter \"basic\" worker 2 finished job for \"7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\"\r\n18:08:07.099659 trace git-lfs: tq: retrying object 7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386: LFS: Client error: https://s3.amazonaws.com/lfs.huggingface.co/allenai/unifiedqa-t5-3b/7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIA4N7VTDGOZQA2IKWK%2F20201112%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20201112T020756Z&X-Amz-Expires=900&X-Amz-Signature=1c194a7990031e65288f0e7c595078387ee098bd5b3f3708e02085e3e9f6601a&X-Amz-SignedHeaders=host\r\n18:08:07.099701 trace git-lfs: tq: enqueue retry #4 for \"7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\" (size: 11406640119)\r\n18:08:07.099742 trace git-lfs: tq: sending batch of size 1\r\n18:08:07.099832 trace git-lfs: api: batch 1 files\r\n18:08:07.099999 trace 
git-lfs: creds: git credential cache (\"https\", \"huggingface.co\", \"\")\r\n18:08:07.100008 trace git-lfs: Filled credentials for https://huggingface.co/allenai/unifiedqa-t5-3b\r\n18:08:07.100024 trace git-lfs: HTTP: POST https://huggingface.co/allenai/unifiedqa-t5-3b.git/info/lfs/objects/batch\r\n> POST /allenai/unifiedqa-t5-3b.git/info/lfs/objects/batch HTTP/1.1\r\n> Host: huggingface.co\r\n> Accept: application/vnd.git-lfs+json; charset=utf-8\r\n> Authorization: Basic * * * * *\r\n> Content-Length: 205\r\n> Content-Type: application/vnd.git-lfs+json; charset=utf-8\r\n> User-Agent: git-lfs/2.10.0 (GitHub; darwin amd64; go 1.13.6)\r\n> \r\n{\"operation\":\"upload\",\"objects\":[{\"oid\":\"7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\",\"size\":11406640119}],\"transfers\":[\"lfs-standalone-file\",\"basic\"],\"ref\":{\"name\":\"refs/heads/main\"}}18:08:07.441913 trace git-lfs: HTTP: 200\r\n\r\n\r\n< HTTP/1.1 200 OK\r\n< Content-Length: 578\r\n< Connection: keep-alive\r\n< Content-Type: application/vnd.git-lfs+json; charset=utf-8\r\n< Date: Thu, 12 Nov 2020 02:08:07 GMT\r\n< Etag: W/\"242-aR0wlUnNkp2RbtWgiEkJ7LUjpW0\"\r\n< Server: nginx/1.14.2\r\n< X-Powered-By: huggingface-moon\r\n< \r\n18:08:07.442095 trace git-lfs: HTTP: {\"transfer\":\"basic\",\"objects\":[{\"oid\":\"7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\",\"size\":11406640119,\"authenticated\":true,\"actions\":{\"upload\":{\"href\":\"https://s3.amazonaws.com/lfs.huggingface.co/allenai/unifiedqa-t5-3b/7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIA4N7VTDGOZQA2IKWK%2F20201112%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20201112T020807Z&X-Amz-Expires=900&X-Amz-Signature=126779f211c325c10aba6be7bfc4b\r\n{\"transfer\":\"basic\",\"objects\":[{\"oid\":\"7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\",\"size\":11406640119,\"authenticated\":true,\"actions\":{\"upload\":{\"href\":\"https://s3.amazonaws.com/lfs.huggingface.co/allenai/unifiedqa-t5-3b/7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIA4N7VTDGOZQA2IKWK%2F20201112%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20201112T020807Z&X-Amz-Expires=900&X-Amz-Signature=126779f211c325c10aba6be7bfc4b18:08:07.442300 trace git-lfs: HTTP: 32fc8b2d03b037c48b35a864f81d0c3f11f&X-Amz-SignedHeaders=host\"}}}]}\r\n32fc8b2d03b037c48b35a864f81d0c3f11f&X-Amz-SignedHeaders=host\"}}}]}Uploading LFS objects: 0% (0/1), 9.7 MB | 673 KB/s 18:08:07.442493 trace git-lfs: xfer: adapter \"basic\" worker 6 processing job for \"7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\"\r\n18:08:07.442893 trace git-lfs: HTTP: PUT https://s3.amazonaws.com/lfs.huggingface.co/allenai/unifiedqa-t5-3b/7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\r\n> PUT /lfs.huggingface.co/allenai/unifiedqa-t5-3b/7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIA4N7VTDGOZQA2IKWK%2F20201112%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20201112T020807Z&X-Amz-Expires=900&X-Amz-Signature=126779f211c325c10aba6be7bfc4b32fc8b2d03b037c48b35a864f81d0c3f11f&X-Amz-SignedHeaders=host HTTP/1.1\r\n> Host: s3.amazonaws.com\r\n> Content-Length: 11406640119\r\n> Content-Type: application/zip\r\n> User-Agent: git-lfs/2.10.0 (GitHub; darwin amd64; go 1.13.6)\r\n> \r\n18:08:18.357156 trace git-lfs: HTTP: 400 | 549 KB/s \r\n\r\n\r\n< HTTP/1.1 400 Bad Request\r\n< 
Connection: close\r\n< Transfer-Encoding: chunked\r\n< Content-Type: application/xml\r\n< Date: Thu, 12 Nov 2020 02:08:17 GMT\r\n< Server: AmazonS3\r\n< X-Amz-Id-2: 18NzY2b209RdCK3nCS9J1AwpWxSPw7jRub8DLEosfO4JcG33iZ00V59ZRf/CwwCpEFS/G7xHPsI=\r\n< X-Amz-Request-Id: 8M6V1WEG2N8R8YAT\r\n< \r\n18:08:18.357367 trace git-lfs: xfer: adapter \"basic\" worker 6 finished job for \"7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\"\r\n18:08:18.357394 trace git-lfs: tq: retrying object 7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386: LFS: Client error: https://s3.amazonaws.com/lfs.huggingface.co/allenai/unifiedqa-t5-3b/7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIA4N7VTDGOZQA2IKWK%2F20201112%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20201112T020807Z&X-Amz-Expires=900&X-Amz-Signature=126779f211c325c10aba6be7bfc4b32fc8b2d03b037c48b35a864f81d0c3f11f&X-Amz-SignedHeaders=host\r\n18:08:18.357453 trace git-lfs: tq: enqueue retry #5 for \"7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\" (size: 11406640119)\r\n18:08:18.357489 trace git-lfs: tq: sending batch of size 1\r\n18:08:18.357602 trace git-lfs: api: batch 1 files\r\n18:08:18.357764 trace git-lfs: creds: git credential cache (\"https\", \"huggingface.co\", \"\")\r\n18:08:18.357773 trace git-lfs: Filled credentials for https://huggingface.co/allenai/unifiedqa-t5-3b\r\n18:08:18.357788 trace git-lfs: HTTP: POST https://huggingface.co/allenai/unifiedqa-t5-3b.git/info/lfs/objects/batch\r\n> POST /allenai/unifiedqa-t5-3b.git/info/lfs/objects/batch HTTP/1.1\r\n> Host: huggingface.co\r\n> Accept: application/vnd.git-lfs+json; charset=utf-8\r\n> Authorization: Basic * * * * *\r\n> Content-Length: 205\r\n> Content-Type: application/vnd.git-lfs+json; charset=utf-8\r\n> User-Agent: git-lfs/2.10.0 (GitHub; darwin amd64; go 1.13.6)\r\n> \r\n{\"operation\":\"upload\",\"objects\":[{\"oid\":\"7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\",\"size\":11406640119}],\"transfers\":[\"basic\",\"lfs-standalone-file\"],\"ref\":{\"name\":\"refs/heads/main\"}}18:08:18.659856 trace git-lfs: HTTP: 200\r\n\r\n\r\n< HTTP/1.1 200 OK\r\n< Content-Length: 578\r\n< Connection: keep-alive\r\n< Content-Type: application/vnd.git-lfs+json; charset=utf-8\r\n< Date: Thu, 12 Nov 2020 02:08:18 GMT\r\n< Etag: W/\"242-wt34qjjMKH3OaOLKkwsE5YY47Uo\"\r\n< Server: nginx/1.14.2\r\n< X-Powered-By: huggingface-moon\r\n< \r\n18:08:18.659952 trace git-lfs: HTTP: 
{\"transfer\":\"basic\",\"objects\":[{\"oid\":\"7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\",\"size\":11406640119,\"authenticated\":true,\"actions\":{\"upload\":{\"href\":\"https://s3.amazonaws.com/lfs.huggingface.co/allenai/unifiedqa-t5-3b/7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIA4N7VTDGOZQA2IKWK%2F20201112%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20201112T020818Z&X-Amz-Expires=900&X-Amz-Signature=4f4be5d714bb7cb2270c3c9934412\r\n{\"transfer\":\"basic\",\"objects\":[{\"oid\":\"7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\",\"size\":11406640119,\"authenticated\":true,\"actions\":{\"upload\":{\"href\":\"https://s3.amazonaws.com/lfs.huggingface.co/allenai/unifiedqa-t5-3b/7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIA4N7VTDGOZQA2IKWK%2F20201112%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20201112T020818Z&X-Amz-Expires=900&X-Amz-Signature=4f4be5d714bb7cb2270c3c993441218:08:18.660061 trace git-lfs: HTTP: cf1f62357f5a3fcde898603508194a423f1&X-Amz-SignedHeaders=host\"}}}]}\r\ncf1f62357f5a3fcde898603508194a423f1&X-Amz-SignedHeaders=host\"}}}]}Uploading LFS objects: 0% (0/1), 9.7 MB | 549 KB/s 18:08:18.660225 trace git-lfs: xfer: adapter \"basic\" worker 7 processing job for \"7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\"\r\n18:08:18.660511 trace git-lfs: HTTP: PUT https://s3.amazonaws.com/lfs.huggingface.co/allenai/unifiedqa-t5-3b/7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\r\n> PUT /lfs.huggingface.co/allenai/unifiedqa-t5-3b/7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIA4N7VTDGOZQA2IKWK%2F20201112%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20201112T020818Z&X-Amz-Expires=900&X-Amz-Signature=4f4be5d714bb7cb2270c3c9934412cf1f62357f5a3fcde898603508194a423f1&X-Amz-SignedHeaders=host HTTP/1.1\r\n> Host: s3.amazonaws.com\r\n> Content-Length: 11406640119\r\n> Content-Type: application/zip\r\n> User-Agent: git-lfs/2.10.0 (GitHub; darwin amd64; go 1.13.6)\r\n> \r\n18:08:31.284958 trace git-lfs: HTTP: 400 | 415 KB/s \r\n\r\n\r\n< HTTP/1.1 400 Bad Request\r\n< Connection: close\r\n< Transfer-Encoding: chunked\r\n< Content-Type: application/xml\r\n< Date: Thu, 12 Nov 2020 02:08:30 GMT\r\n< Server: AmazonS3\r\n< X-Amz-Id-2: EilC4w16RhqwexN8CgO2pXC5Vf5T7PUWS5lsntHalCkp603MmhbpjBtHiITw8NIYifaMK5cuY6U=\r\n< X-Amz-Request-Id: A0585EE068BDEB73\r\n< \r\n18:08:31.285190 trace git-lfs: xfer: adapter \"basic\" worker 7 finished job for \"7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\"\r\n18:08:31.285198 trace git-lfs: tq: retrying object 7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386: LFS: Client error: https://s3.amazonaws.com/lfs.huggingface.co/allenai/unifiedqa-t5-3b/7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIA4N7VTDGOZQA2IKWK%2F20201112%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20201112T020818Z&X-Amz-Expires=900&X-Amz-Signature=4f4be5d714bb7cb2270c3c9934412cf1f62357f5a3fcde898603508194a423f1&X-Amz-SignedHeaders=host\r\n18:08:31.285250 trace git-lfs: tq: enqueue retry #6 for \"7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\" (size: 11406640119)\r\n18:08:31.285284 trace git-lfs: tq: sending batch of size 1\r\n18:08:31.285391 trace git-lfs: api: batch 1 files\r\n18:08:31.285539 trace 
git-lfs: creds: git credential cache (\"https\", \"huggingface.co\", \"\")\r\n18:08:31.285549 trace git-lfs: Filled credentials for https://huggingface.co/allenai/unifiedqa-t5-3b\r\n18:08:31.285566 trace git-lfs: HTTP: POST https://huggingface.co/allenai/unifiedqa-t5-3b.git/info/lfs/objects/batch\r\n> POST /allenai/unifiedqa-t5-3b.git/info/lfs/objects/batch HTTP/1.1\r\n> Host: huggingface.co\r\n> Accept: application/vnd.git-lfs+json; charset=utf-8\r\n> Authorization: Basic * * * * *\r\n> Content-Length: 205\r\n> Content-Type: application/vnd.git-lfs+json; charset=utf-8\r\n> User-Agent: git-lfs/2.10.0 (GitHub; darwin amd64; go 1.13.6)\r\n> \r\n{\"operation\":\"upload\",\"objects\":[{\"oid\":\"7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\",\"size\":11406640119}],\"transfers\":[\"lfs-standalone-file\",\"basic\"],\"ref\":{\"name\":\"refs/heads/main\"}}18:08:31.638814 trace git-lfs: HTTP: 200\r\n\r\n\r\n< HTTP/1.1 200 OK\r\n< Content-Length: 578\r\n< Connection: keep-alive\r\n< Content-Type: application/vnd.git-lfs+json; charset=utf-8\r\n< Date: Thu, 12 Nov 2020 02:08:31 GMT\r\n< Etag: W/\"242-7CB890z2UIC8LfHHmFvE0XNO8co\"\r\n< Server: nginx/1.14.2\r\n< X-Powered-By: huggingface-moon\r\n< \r\n18:08:31.639032 trace git-lfs: HTTP: {\"transfer\":\"basic\",\"objects\":[{\"oid\":\"7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\",\"size\":11406640119,\"authenticated\":true,\"actions\":{\"upload\":{\"href\":\"https://s3.amazonaws.com/lfs.huggingface.co/allenai/unifiedqa-t5-3b/7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIA4N7VTDGOZQA2IKWK%2F20201112%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20201112T020831Z&X-Amz-Expires=900&X-Amz-Signature=577f945c7c793130dc45581c51367\r\n{\"transfer\":\"basic\",\"objects\":[{\"oid\":\"7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\",\"size\":11406640119,\"authenticated\":true,\"actions\":{\"upload\":{\"href\":\"https://s3.amazonaws.com/lfs.huggingface.co/allenai/unifiedqa-t5-3b/7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIA4N7VTDGOZQA2IKWK%2F20201112%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20201112T020831Z&X-Amz-Expires=900&X-Amz-Signature=577f945c7c793130dc45581c5136718:08:31.639183 trace git-lfs: HTTP: 2755ab7636ad209ef6755d542a332673930&X-Amz-SignedHeaders=host\"}}}]}\r\n2755ab7636ad209ef6755d542a332673930&X-Amz-SignedHeaders=host\"}}}]}Uploading LFS objects: 0% (0/1), 9.7 MB | 415 KB/s 18:08:31.639442 trace git-lfs: xfer: adapter \"basic\" worker 1 processing job for \"7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\"\r\n18:08:31.639795 trace git-lfs: HTTP: PUT https://s3.amazonaws.com/lfs.huggingface.co/allenai/unifiedqa-t5-3b/7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\r\n> PUT /lfs.huggingface.co/allenai/unifiedqa-t5-3b/7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIA4N7VTDGOZQA2IKWK%2F20201112%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20201112T020831Z&X-Amz-Expires=900&X-Amz-Signature=577f945c7c793130dc45581c513672755ab7636ad209ef6755d542a332673930&X-Amz-SignedHeaders=host HTTP/1.1\r\n> Host: s3.amazonaws.com\r\n> Content-Length: 11406640119\r\n> Content-Type: application/zip\r\n> User-Agent: git-lfs/2.10.0 (GitHub; darwin amd64; go 1.13.6)\r\n> \r\n18:08:35.670792 trace git-lfs: HTTP: 400 | 442 KB/s \r\n\r\n\r\n< HTTP/1.1 400 Bad Request\r\n< 
Connection: close\r\n< Transfer-Encoding: chunked\r\n< Content-Type: application/xml\r\n< Date: Thu, 12 Nov 2020 02:08:34 GMT\r\n< Server: AmazonS3\r\n< X-Amz-Id-2: 37wPu8zcJ6igY2DAtJ27Oaf5vcLhzCJStEw6bBpHK4QIwUFxcriAuVDuPgfYsUp5mOIqpGXYd5g=\r\n< X-Amz-Request-Id: 4040905E813EF937\r\n< \r\n18:08:35.670992 trace git-lfs: xfer: adapter \"basic\" worker 1 finished job for \"7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\"\r\n18:08:35.671009 trace git-lfs: tq: retrying object 7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386: LFS: Client error: https://s3.amazonaws.com/lfs.huggingface.co/allenai/unifiedqa-t5-3b/7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIA4N7VTDGOZQA2IKWK%2F20201112%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20201112T020831Z&X-Amz-Expires=900&X-Amz-Signature=577f945c7c793130dc45581c513672755ab7636ad209ef6755d542a332673930&X-Amz-SignedHeaders=host\r\n18:08:35.671057 trace git-lfs: tq: enqueue retry #7 for \"7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\" (size: 11406640119)\r\n18:08:35.671169 trace git-lfs: tq: sending batch of size 1\r\n18:08:35.671270 trace git-lfs: api: batch 1 files\r\n18:08:35.671422 trace git-lfs: creds: git credential cache (\"https\", \"huggingface.co\", \"\")\r\n18:08:35.671434 trace git-lfs: Filled credentials for https://huggingface.co/allenai/unifiedqa-t5-3b\r\n18:08:35.671449 trace git-lfs: HTTP: POST https://huggingface.co/allenai/unifiedqa-t5-3b.git/info/lfs/objects/batch\r\n> POST /allenai/unifiedqa-t5-3b.git/info/lfs/objects/batch HTTP/1.1\r\n> Host: huggingface.co\r\n> Accept: application/vnd.git-lfs+json; charset=utf-8\r\n> Authorization: Basic * * * * *\r\n> Content-Length: 205\r\n> Content-Type: application/vnd.git-lfs+json; charset=utf-8\r\n> User-Agent: git-lfs/2.10.0 (GitHub; darwin amd64; go 1.13.6)\r\n> \r\n{\"operation\":\"upload\",\"objects\":[{\"oid\":\"7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\",\"size\":11406640119}],\"transfers\":[\"lfs-standalone-file\",\"basic\"],\"ref\":{\"name\":\"refs/heads/main\"}}18:08:35.978219 trace git-lfs: HTTP: 200\r\n\r\n\r\n< HTTP/1.1 200 OK\r\n< Content-Length: 578\r\n< Connection: keep-alive\r\n< Content-Type: application/vnd.git-lfs+json; charset=utf-8\r\n< Date: Thu, 12 Nov 2020 02:08:36 GMT\r\n< Etag: W/\"242-aBj4kp6nW/vZfASETDB6DUEmP80\"\r\n< Server: nginx/1.14.2\r\n< X-Powered-By: huggingface-moon\r\n< \r\n18:08:35.978365 trace git-lfs: HTTP: 
{\"transfer\":\"basic\",\"objects\":[{\"oid\":\"7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\",\"size\":11406640119,\"authenticated\":true,\"actions\":{\"upload\":{\"href\":\"https://s3.amazonaws.com/lfs.huggingface.co/allenai/unifiedqa-t5-3b/7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIA4N7VTDGOZQA2IKWK%2F20201112%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20201112T020836Z&X-Amz-Expires=900&X-Amz-Signature=25e0ceea23657b4711a66397bf0e4\r\n{\"transfer\":\"basic\",\"objects\":[{\"oid\":\"7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\",\"size\":11406640119,\"authenticated\":true,\"actions\":{\"upload\":{\"href\":\"https://s3.amazonaws.com/lfs.huggingface.co/allenai/unifiedqa-t5-3b/7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIA4N7VTDGOZQA2IKWK%2F20201112%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20201112T020836Z&X-Amz-Expires=900&X-Amz-Signature=25e0ceea23657b4711a66397bf0e418:08:35.978471 trace git-lfs: HTTP: 274d53934c669794d8f31e5be4472d27493&X-Amz-SignedHeaders=host\"}}}]}\r\n274d53934c669794d8f31e5be4472d27493&X-Amz-SignedHeaders=host\"}}}]}Uploading LFS objects: 0% (0/1), 9.7 MB | 442 KB/s 18:08:35.978651 trace git-lfs: xfer: adapter \"basic\" worker 3 processing job for \"7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\"\r\n18:08:35.978961 trace git-lfs: HTTP: PUT https://s3.amazonaws.com/lfs.huggingface.co/allenai/unifiedqa-t5-3b/7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\r\n> PUT /lfs.huggingface.co/allenai/unifiedqa-t5-3b/7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIA4N7VTDGOZQA2IKWK%2F20201112%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20201112T020836Z&X-Amz-Expires=900&X-Amz-Signature=25e0ceea23657b4711a66397bf0e4274d53934c669794d8f31e5be4472d27493&X-Amz-SignedHeaders=host HTTP/1.1\r\n> Host: s3.amazonaws.com\r\n> Content-Length: 11406640119\r\n> Content-Type: application/zip\r\n> User-Agent: git-lfs/2.10.0 (GitHub; darwin amd64; go 1.13.6)\r\n> \r\n18:08:47.217142 trace git-lfs: HTTP: 400 | 382 KB/s \r\n\r\n\r\n< HTTP/1.1 400 Bad Request\r\n< Connection: close\r\n< Transfer-Encoding: chunked\r\n< Content-Type: application/xml\r\n< Date: Thu, 12 Nov 2020 02:08:46 GMT\r\n< Server: AmazonS3\r\n< X-Amz-Id-2: 5AEyU9ANTZA6eG2d4Y1XW5KAQ5XX9TsO5IKpThwbwvYh2x2neejx+SxYlt7ysbZ5ZZKRtOQhp0k=\r\n< X-Amz-Request-Id: CF55ABCF55095CE9\r\n< \r\n18:08:47.217330 trace git-lfs: xfer: adapter \"basic\" worker 3 finished job for \"7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\"\r\n18:08:47.217349 trace git-lfs: tq: retrying object 7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386: LFS: Client error: https://s3.amazonaws.com/lfs.huggingface.co/allenai/unifiedqa-t5-3b/7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIA4N7VTDGOZQA2IKWK%2F20201112%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20201112T020836Z&X-Amz-Expires=900&X-Amz-Signature=25e0ceea23657b4711a66397bf0e4274d53934c669794d8f31e5be4472d27493&X-Amz-SignedHeaders=host\r\n18:08:47.217399 trace git-lfs: tq: enqueue retry #8 for \"7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\" (size: 11406640119)\r\n18:08:47.217432 trace git-lfs: tq: sending batch of size 1\r\n18:08:47.217524 trace git-lfs: api: batch 1 files\r\n18:08:47.217666 trace 
git-lfs: creds: git credential cache (\"https\", \"huggingface.co\", \"\")\r\n18:08:47.217675 trace git-lfs: Filled credentials for https://huggingface.co/allenai/unifiedqa-t5-3b\r\n18:08:47.217689 trace git-lfs: HTTP: POST https://huggingface.co/allenai/unifiedqa-t5-3b.git/info/lfs/objects/batch\r\n> POST /allenai/unifiedqa-t5-3b.git/info/lfs/objects/batch HTTP/1.1\r\n> Host: huggingface.co\r\n> Accept: application/vnd.git-lfs+json; charset=utf-8\r\n> Authorization: Basic * * * * *\r\n> Content-Length: 205\r\n> Content-Type: application/vnd.git-lfs+json; charset=utf-8\r\n> User-Agent: git-lfs/2.10.0 (GitHub; darwin amd64; go 1.13.6)\r\n> \r\n{\"operation\":\"upload\",\"objects\":[{\"oid\":\"7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\",\"size\":11406640119}],\"transfers\":[\"lfs-standalone-file\",\"basic\"],\"ref\":{\"name\":\"refs/heads/main\"}}18:08:47.576518 trace git-lfs: HTTP: 200\r\n\r\n\r\n< HTTP/1.1 200 OK\r\n< Content-Length: 578\r\n< Connection: keep-alive\r\n< Content-Type: application/vnd.git-lfs+json; charset=utf-8\r\n< Date: Thu, 12 Nov 2020 02:08:47 GMT\r\n< Etag: W/\"242-I6sTx/9B2Dp11gS7wtbjrP1c3lQ\"\r\n< Server: nginx/1.14.2\r\n< X-Powered-By: huggingface-moon\r\n< \r\n18:08:47.576645 trace git-lfs: HTTP: {\"transfer\":\"basic\",\"objects\":[{\"oid\":\"7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\",\"size\":11406640119,\"authenticated\":true,\"actions\":{\"upload\":{\"href\":\"https://s3.amazonaws.com/lfs.huggingface.co/allenai/unifiedqa-t5-3b/7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIA4N7VTDGOZQA2IKWK%2F20201112%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20201112T020847Z&X-Amz-Expires=900&X-Amz-Signature=81eab82a86449f39b5dc223fbdf2b\r\n{\"transfer\":\"basic\",\"objects\":[{\"oid\":\"7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\",\"size\":11406640119,\"authenticated\":true,\"actions\":{\"upload\":{\"href\":\"https://s3.amazonaws.com/lfs.huggingface.co/allenai/unifiedqa-t5-3b/7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIA4N7VTDGOZQA2IKWK%2F20201112%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20201112T020847Z&X-Amz-Expires=900&X-Amz-Signature=81eab82a86449f39b5dc223fbdf2b18:08:47.576740 trace git-lfs: HTTP: d23ecd4834899033d6a896742ea480a1985&X-Amz-SignedHeaders=host\"}}}]}\r\nd23ecd4834899033d6a896742ea480a1985&X-Amz-SignedHeaders=host\"}}}]}Uploading LFS objects: 0% (0/1), 9.7 MB | 382 KB/s 18:08:47.576910 trace git-lfs: xfer: adapter \"basic\" worker 0 processing job for \"7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\"\r\n18:08:47.577223 trace git-lfs: HTTP: PUT https://s3.amazonaws.com/lfs.huggingface.co/allenai/unifiedqa-t5-3b/7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\r\n> PUT /lfs.huggingface.co/allenai/unifiedqa-t5-3b/7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIA4N7VTDGOZQA2IKWK%2F20201112%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20201112T020847Z&X-Amz-Expires=900&X-Amz-Signature=81eab82a86449f39b5dc223fbdf2bd23ecd4834899033d6a896742ea480a1985&X-Amz-SignedHeaders=host HTTP/1.1\r\n> Host: s3.amazonaws.com\r\n> Content-Length: 11406640119\r\n> Content-Type: application/zip\r\n> User-Agent: git-lfs/2.10.0 (GitHub; darwin amd64; go 1.13.6)\r\n> \r\n18:08:50.864302 trace git-lfs: HTTP: 400 | 278 KB/s \r\n\r\n\r\n< HTTP/1.1 400 Bad Request\r\n< 
Connection: close\r\n< Transfer-Encoding: chunked\r\n< Content-Type: application/xml\r\n< Date: Thu, 12 Nov 2020 02:08:49 GMT\r\n< Server: AmazonS3\r\n< X-Amz-Id-2: hV+PVm+Jl6JpvptNirGJM1ZhxunLPQcDUc0z0Ea053vMhwpgNMGs57y/qnEQFaL5ffAzrTmcfOI=\r\n< X-Amz-Request-Id: 7Z1MEY0MAV4T5NCY\r\n< \r\n18:08:50.864739 trace git-lfs: xfer: adapter \"basic\" worker 0 finished job for \"7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\"\r\n18:08:50.864774 trace git-lfs: tq: refusing to retry \"7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\", too many retries (8)\r\n18:08:50.864842 trace git-lfs: tq: refusing to retry \"7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386\", too many retries (8)\r\n18:08:50.864891 trace git-lfs: xfer: adapter \"basic\" End()\r\n18:08:50.864903 trace git-lfs: xfer: adapter \"basic\" worker 4 stopping\r\n18:08:50.864910 trace git-lfs: xfer: adapter \"basic\" worker 0 stopping\r\n18:08:50.864929 trace git-lfs: xfer: adapter \"basic\" worker 3 stopping\r\n18:08:50.864935 trace git-lfs: xfer: adapter \"basic\" worker 1 stopping\r\n18:08:50.864940 trace git-lfs: xfer: adapter \"basic\" worker 7 stopping\r\n18:08:50.864946 trace git-lfs: xfer: adapter \"basic\" worker 6 stopping\r\n18:08:50.864954 trace git-lfs: xfer: adapter \"basic\" worker 2 stopping\r\n18:08:50.864956 trace git-lfs: xfer: adapter \"basic\" worker 5 stopping\r\n18:08:50.865017 trace git-lfs: xfer: adapter \"basic\" stopped\r\nLFS: Client error: https://s3.amazonaws.com/lfs.huggingface.co/allenai/unifiedqa-t5-3b/7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIA4N7VTDGOZQA2IKWK%2F20201112%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20201112T020847Z&X-Amz-Expires=900&X-Amz-Signature=81eab82a86449f39b5dc223fbdf2bd23ecd4834899033d6a896742ea480a1985&X-Amz-SignedHeaders=host\r\nUploading LFS objects: 0% (0/1), 9.7 MB | 278 KB/s, done.\r\nerror: failed to push some refs to 'https://huggingface.co/allenai/unifiedqa-t5-3b'\r\n* Closing connection 0\r\n```\r\n\r\nI see lines like this that contain error messages, but not sure what they mean: \r\n```\r\n18:08:47.217349 trace git-lfs: tq: retrying object 7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386: LFS: Client error: https://s3.amazonaws.com/lfs.huggingface.co/allenai/unifiedqa-t5-3b/7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIA4N7VTDGOZQA2IKWK%2F20201112%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20201112T020836Z&X-Amz-Expires=900&X-Amz-Signature=25e0ceea23657b4711a66397bf0e4274d53934c669794d8f31e5be4472d27493&X-Amz-SignedHeaders=host\r\n```\r\n\r\nI also tried tweaking the git config parameters a bit, just in case they matter; but did not help. \r\n```\r\n$ git config --global lfs.transfer.maxretries 10\r\n$ git config --global lfs.dialtimeout 600000\r\n```\r\n", "Very weird, and unfortunately error messages from S3 aren't very informative (we're generating presigned upload urls to S3, in your log git-lfs actually tries to upload to S3).\r\n\r\nWhat kind of upload bandwidth do you have? 
(cc @pierrci) Can you share your pytorch_model.bin somewhere so that I can try pushing it in a clone of your model?\r\n\r\nThe one thing I'm wondering is whether, if an S3 presigned URL expires while the upload is still underway, S3 then rejects the upload.", "@julien-c The model files are here: https://console.cloud.google.com/storage/browser/unifiedqa/tmp;tab=objects \r\n\r\n> What kind of upload bandwidth do you have?\r\n\r\nI am actually not sure how to answer this question. But my internet is quite reliable; I've never had any major issues with downloads or uploads. \r\n", "Ok @danyaljj, we can reproduce and will be working on a fix in the coming weeks.\r\n\r\nIn the meantime, do you want me to upload your models manually?", "> In the meantime, do you want me to upload your models manually? \r\n\r\nThat would be great! 🙏 ", "⚠️⚠️ For anyone else in the Hugging Face team (@patrickvonplaten notably) who might have to upload large models before we improve native support for large files (ETA = about 2 weeks), here's the current workaround (Reminder: the previous workaround was simply `aws s3 cp`, as the `transformers-cli` already had the same issue):\r\n\r\n- compute the sha256 of the large file with e.g. `sha256sum` (takes 3 mins on a beefy machine for the 42GB t5-11b checkpoint)\r\n- copy the file to our lfs bucket, named with the sha256: `aws s3 cp pytorch_model.bin s3://lfs.huggingface.co/{model_id}/{sha256_from_above}`\r\n- clone the model repo you want to push to (with `GIT_LFS_SKIP_SMUDGE=1`) and write an LFS pointer file manually in the file's place, replacing the sha256 and the file size: an example for t5-3b is https://huggingface.co/allenai/unifiedqa-t5-3b/blob/main/pytorch_model.bin\r\n- commit and push\r\n\r\nYou can then check that it worked (with git-lfs installed):\r\n\r\n```\r\ngit clone https://huggingface.co/{model_id}\r\n```\r\n\r\ncc @Pierrci @Narsil @thomwolf ", "Ok @danyaljj thanks for your patience 😄 \r\n\r\nFiles are uploaded at \r\nhttps://huggingface.co/allenai/unifiedqa-t5-11b/commits/main\r\nand\r\nhttps://huggingface.co/allenai/unifiedqa-t5-3b/commits/main\r\n\r\nI've checked that git clones work, though the clone takes a pretty long time for the 11b model :)\r\n\r\nLet me know if there is any issue.", "Appreciate the help, @julien-c 🙏 ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
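For context, a minimal shell sketch of the manual workaround described in the comment above. The variable names are mine, the bucket path and placeholders mirror the comment (write access to `lfs.huggingface.co` is assumed, which only the Hugging Face team had), and the pointer layout is the standard git-lfs v1 format; treat this as an illustration under those assumptions, not a supported upload path:

```bash
MODEL_ID="allenai/unifiedqa-t5-3b"
SHA=$(sha256sum pytorch_model.bin | cut -d' ' -f1)
SIZE=$(stat -c%s pytorch_model.bin)   # GNU stat; use `stat -f%z` on macOS

# 1. Copy the large file into the LFS bucket, named after its sha256.
aws s3 cp pytorch_model.bin "s3://lfs.huggingface.co/${MODEL_ID}/${SHA}"

# 2. Clone without smudging LFS files, then hand-write the standard pointer file.
#    (Assumes .gitattributes already tracks *.bin via LFS, the default for hub repos.)
GIT_LFS_SKIP_SMUDGE=1 git clone "https://huggingface.co/${MODEL_ID}"
cd "${MODEL_ID##*/}"
printf 'version https://git-lfs.github.com/spec/v1\noid sha256:%s\nsize %s\n' \
  "$SHA" "$SIZE" > pytorch_model.bin

# 3. Commit and push the pointer in place of the real file.
git add pytorch_model.bin
git commit -m "Add pytorch_model.bin via manual LFS pointer"
git push

# 4. Verify from a fresh clone (git-lfs smudges the real file back).
cd .. && git clone "https://huggingface.co/${MODEL_ID}" verify-clone
```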
1,605
1,611
1,611
CONTRIBUTOR
null
I am using the most recent release to upload a model. Like the new instructions suggested, I am using git to upload my files:

```bash
$ git add --all
Encountered 1 file(s) that may not have been copied correctly on Windows:
	pytorch_model.bin
$ git status
On branch main
Your branch is up to date with 'origin/main'.

Changes to be committed:
  (use "git restore --staged <file>..." to unstage)
	new file:   pytorch_model.bin

$ git commit -m 'update'
[main 820bb7e] update
 1 file changed, 3 insertions(+)
 create mode 100644 pytorch_model.bin
$ git push
Username for 'https://huggingface.co': danyaljj
Password for 'https://[email protected]':
LFS: Client error: https://s3.amazonaws.com/lfs.huggingface.co/allenai/unifiedqa-t5-3b/7e295e01528dc6a361211884f82daac33c422089644d0f5b2ddb2d96166aa386?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIA4N7VTDGOZQA2IKWK%2F20201112%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20201112T015933Z&X-Amz-Expires=900&X-Amz-Signature=0115d52aa41e4a5e80315f03689f278c36f3f1d4961ee5544e8bb9b427d0ba7c&X-Amz-SignedHeaders=host
Uploading LFS objects: 0% (0/1), 33 KB | 169 KB/s, done.
error: failed to push some refs to 'https://huggingface.co/allenai/unifiedqa-t5-3b'
```

FYI, here are my versions:

```bash
$ pip list | grep transformers
transformers 3.5.0
```

@julien-c
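A sketch of how a client-side trace like the one quoted in the comments above can be captured, assuming the push is re-run from the cloned model repo; `GIT_TRACE=1` is what produces the `trace git-lfs:` lines, and the two config knobs are the ones tried later in the thread:

```bash
# Capture verbose git / git-lfs tracing for a single push; trace output
# goes to stderr, so redirect it to a file for inspection.
GIT_TRACE=1 GIT_CURL_VERBOSE=1 git push origin main 2> push-trace.log

# Retry/timeout knobs (values as used while debugging this issue):
git config --global lfs.transfer.maxretries 10
git config --global lfs.dialtimeout 600000
```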
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8480/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8480/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8479
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8479/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8479/comments
https://api.github.com/repos/huggingface/transformers/issues/8479/events
https://github.com/huggingface/transformers/pull/8479
741,189,946
MDExOlB1bGxSZXF1ZXN0NTE5NTQ4NjQ4
8,479
Fix SqueezeBERT for masked language model
{ "login": "forresti", "id": 2020010, "node_id": "MDQ6VXNlcjIwMjAwMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2020010?v=4", "gravatar_id": "", "url": "https://api.github.com/users/forresti", "html_url": "https://github.com/forresti", "followers_url": "https://api.github.com/users/forresti/followers", "following_url": "https://api.github.com/users/forresti/following{/other_user}", "gists_url": "https://api.github.com/users/forresti/gists{/gist_id}", "starred_url": "https://api.github.com/users/forresti/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/forresti/subscriptions", "organizations_url": "https://api.github.com/users/forresti/orgs", "repos_url": "https://api.github.com/users/forresti/repos", "events_url": "https://api.github.com/users/forresti/events{/privacy}", "received_events_url": "https://api.github.com/users/forresti/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,605
1,605
1,605
CONTRIBUTOR
null
# What does this PR do?

This corrects a mistake in the implementation of SqueezeBertForMaskedLM.

Fixes #8277

## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Yes: https://github.com/huggingface/transformers/issues/8277
- [x] Did you make sure to update the documentation with your changes? Here are the
- [ ] Did you write any new necessary tests? _No tests added._

## Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.

<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. -->

@sgugger @LysandreJik @ontocord
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8479/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8479/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8479", "html_url": "https://github.com/huggingface/transformers/pull/8479", "diff_url": "https://github.com/huggingface/transformers/pull/8479.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8479.patch", "merged_at": 1605201578000 }
https://api.github.com/repos/huggingface/transformers/issues/8478
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8478/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8478/comments
https://api.github.com/repos/huggingface/transformers/issues/8478/events
https://github.com/huggingface/transformers/pull/8478
741,183,012
MDExOlB1bGxSZXF1ZXN0NTE5NTQzMzQz
8,478
[s2s] finetune.py: specifying generation min_length
{ "login": "danyaljj", "id": 2441454, "node_id": "MDQ6VXNlcjI0NDE0NTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2441454?v=4", "gravatar_id": "", "url": "https://api.github.com/users/danyaljj", "html_url": "https://github.com/danyaljj", "followers_url": "https://api.github.com/users/danyaljj/followers", "following_url": "https://api.github.com/users/danyaljj/following{/other_user}", "gists_url": "https://api.github.com/users/danyaljj/gists{/gist_id}", "starred_url": "https://api.github.com/users/danyaljj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/danyaljj/subscriptions", "organizations_url": "https://api.github.com/users/danyaljj/orgs", "repos_url": "https://api.github.com/users/danyaljj/repos", "events_url": "https://api.github.com/users/danyaljj/events{/privacy}", "received_events_url": "https://api.github.com/users/danyaljj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "LGTM, but will break CI. no idea why it's not running.\r\nYou will need to update the tests. I bet `pytest examples/seq2seq/test_seq2seq_examples.py` will fail ( you should fix that).\r\n", "Yeah, here is the error I am seeing. \r\n\r\n```\r\n$ pytest examples/seq2seq/test_seq2seq_examples.py\r\ncomet_ml is installed but `COMET_API_KEY` is not set.\r\nTraceback (most recent call last):\r\n File \"/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/_pytest/config/__init__.py\", line 430, in _importconftest\r\n return self._conftestpath2mod[conftestpath]\r\nKeyError: local('/Users/danielk/ideaProjects/transformers/examples/conftest.py')\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/Users/danielk/opt/anaconda3/bin/pytest\", line 10, in <module>\r\n sys.exit(main())\r\n File \"/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/_pytest/config/__init__.py\", line 58, in main\r\n config = _prepareconfig(args, plugins)\r\n File \"/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/_pytest/config/__init__.py\", line 196, in _prepareconfig\r\n pluginmanager=pluginmanager, args=args\r\n File \"/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/pluggy/hooks.py\", line 286, in __call__\r\n return self._hookexec(self, self.get_hookimpls(), kwargs)\r\n File \"/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/pluggy/manager.py\", line 92, in _hookexec\r\n return self._inner_hookexec(hook, methods, kwargs)\r\n File \"/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/pluggy/manager.py\", line 86, in <lambda>\r\n firstresult=hook.spec.opts.get(\"firstresult\") if hook.spec else False,\r\n File \"/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/pluggy/callers.py\", line 203, in _multicall\r\n gen.send(outcome)\r\n File \"/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/_pytest/helpconfig.py\", line 93, in pytest_cmdline_parse\r\n config = outcome.get_result()\r\n File \"/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/pluggy/callers.py\", line 80, in get_result\r\n raise ex[1].with_traceback(ex[2])\r\n File \"/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/pluggy/callers.py\", line 187, in _multicall\r\n res = hook_impl.function(*args)\r\n File \"/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/_pytest/config/__init__.py\", line 675, in pytest_cmdline_parse\r\n self.parse(args)\r\n File \"/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/_pytest/config/__init__.py\", line 845, in parse\r\n self._preparse(args, addopts=addopts)\r\n File \"/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/_pytest/config/__init__.py\", line 809, in _preparse\r\n early_config=self, args=args, parser=self._parser\r\n File \"/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/pluggy/hooks.py\", line 286, in __call__\r\n return self._hookexec(self, self.get_hookimpls(), kwargs)\r\n File \"/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/pluggy/manager.py\", line 92, in _hookexec\r\n return self._inner_hookexec(hook, methods, kwargs)\r\n File \"/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/pluggy/manager.py\", line 86, in <lambda>\r\n firstresult=hook.spec.opts.get(\"firstresult\") if hook.spec else False,\r\n File \"/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/pluggy/callers.py\", line 208, in _multicall\r\n return outcome.get_result()\r\n File \"/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/pluggy/callers.py\", 
line 80, in get_result\r\n raise ex[1].with_traceback(ex[2])\r\n File \"/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/pluggy/callers.py\", line 187, in _multicall\r\n res = hook_impl.function(*args)\r\n File \"/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/_pytest/config/__init__.py\", line 719, in pytest_load_initial_conftests\r\n self.pluginmanager._set_initial_conftests(early_config.known_args_namespace)\r\n File \"/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/_pytest/config/__init__.py\", line 379, in _set_initial_conftests\r\n self._try_load_conftest(current)\r\n File \"/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/_pytest/config/__init__.py\", line 382, in _try_load_conftest\r\n self._getconftestmodules(anchor)\r\n File \"/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/_pytest/config/__init__.py\", line 414, in _getconftestmodules\r\n mod = self._importconftest(conftestpath)\r\n File \"/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/_pytest/config/__init__.py\", line 464, in _importconftest\r\n self.consider_conftest(mod)\r\n File \"/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/_pytest/config/__init__.py\", line 492, in consider_conftest\r\n self.register(conftestmodule, name=conftestmodule.__file__)\r\n File \"/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/_pytest/config/__init__.py\", line 306, in register\r\n ret = super(PytestPluginManager, self).register(plugin, name)\r\n File \"/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/pluggy/manager.py\", line 126, in register\r\n hook._maybe_apply_history(hookimpl)\r\n File \"/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/pluggy/hooks.py\", line 333, in _maybe_apply_history\r\n res = self._hookexec(self, [method], kwargs)\r\n File \"/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/pluggy/manager.py\", line 92, in _hookexec\r\n return self._inner_hookexec(hook, methods, kwargs)\r\n File \"/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/pluggy/manager.py\", line 86, in <lambda>\r\n firstresult=hook.spec.opts.get(\"firstresult\") if hook.spec else False,\r\n File \"/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/pluggy/callers.py\", line 208, in _multicall\r\n return outcome.get_result()\r\n File \"/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/pluggy/callers.py\", line 80, in get_result\r\n raise ex[1].with_traceback(ex[2])\r\n File \"/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/pluggy/callers.py\", line 187, in _multicall\r\n res = hook_impl.function(*args)\r\n File \"/Users/danielk/ideaProjects/transformers/examples/conftest.py\", line 20, in pytest_addoption\r\n from transformers.testing_utils import pytest_addoption_shared\r\n File \"/Users/danielk/ideaProjects/transformers/src/transformers/__init__.py\", line 135, in <module>\r\n from .pipelines import (\r\n File \"/Users/danielk/ideaProjects/transformers/src/transformers/pipelines.py\", line 38, in <module>\r\n from .tokenization_auto import AutoTokenizer\r\n File \"/Users/danielk/ideaProjects/transformers/src/transformers/tokenization_auto.py\", line 119, in <module>\r\n from .tokenization_albert_fast import AlbertTokenizerFast\r\n File \"/Users/danielk/ideaProjects/transformers/src/transformers/tokenization_albert_fast.py\", line 23, in <module>\r\n from .tokenization_utils_fast import PreTrainedTokenizerFast\r\n File \"/Users/danielk/ideaProjects/transformers/src/transformers/tokenization_utils_fast.py\", line 30, in <module>\r\n 
from .convert_slow_tokenizer import convert_slow_tokenizer\r\n File \"/Users/danielk/ideaProjects/transformers/src/transformers/convert_slow_tokenizer.py\", line 25, in <module>\r\n from tokenizers.models import BPE, Unigram, WordPiece\r\nImportError: cannot import name 'Unigram' from 'tokenizers.models' (/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/tokenizers/models/__init__.py)\r\n```\r\n\r\nLikely caused by previous changes? \r\n", "The above error comes from a wrong version of `tokenizers` being installed.\r\n\r\nCould you \r\n```\r\npip install --upgrade tokenizers\r\n```\r\n\r\nand re-run the tests? ", "Have updated the branch now and I *think* the previous error has gone away. ", "Looks good to me. @patil-suraj can you take a final look and merge if you want?", "Something is wrong with this PR, \r\n\r\n1) didn't go through autoformatters\r\n2) failing CI on current master. e.g. see: https://app.circleci.com/pipelines/github/huggingface/transformers/16336/workflows/09f5b053-9f0e-4f70-aae8-3b31c79227f0/jobs/125984 (from unrelated recent PR https://github.com/huggingface/transformers/pull/8798)\r\n3) It looks like CI has never passed on this PR yet was merged - odd\r\n", "Hey @danyaljj - sorry we merged your PR too early without exactly checking whether everything was fine or not. A couple of tests were actually failing on master due to the merge of this PR and I just reverted the PR. \r\n\r\nCould you maybe open a new PR and we'll all make sure this time that all tests pass? :-) \r\n\r\nSorry for the inconvenience! The mistake is definitely on us here!", "post mortem - for some reason this PR had no indication of CI pass/fail - one can only see its status in the merge https://github.com/huggingface/transformers/commit/5aa361f3e56de0f65720f291bb3975bfc98f2837, which fails 3 CIs. \r\n\r\nSo this was definitely an odd situation and probably some bug in CI software itself.", "The tests are failing because in s2s tests the `args` are directly passed to `finetune.py`'s `main` function and the newly added `eval_min_gen_length` is not included in it. \r\n\r\nTwo changes to make the tests pass\r\n1. add `eval_min_gen_length` in `CHEAP_ARGS` `dict` here https://github.com/huggingface/transformers/blob/master/examples/seq2seq/test_seq2seq_examples.py#L30\r\n2. run `make style`" ]
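A sketch of the test fix described in the last comment; only the new key is shown, and the default value is an assumption that should mirror whatever default `finetune.py` registers for the flag:

```python
# examples/seq2seq/test_seq2seq_examples.py (sketch)
CHEAP_ARGS = {
    # ... existing fast-test defaults ...
    "eval_min_gen_length": None,  # assumed default; keep in sync with finetune.py's argparse
}
```

Running `make style` afterwards keeps the formatting check green.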
1,605
1,606
1,606
CONTRIBUTOR
null
# What does this PR do?

Adds an argument to `finetune.py` to specify min length for text generation.

Related to:
https://github.com/huggingface/transformers/issues/5142#issuecomment-724938595
https://github.com/huggingface/transformers/issues/7796#issuecomment-709348940

## Who can review?

@patrickvonplaten @patil-suraj
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8478/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8478/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8478", "html_url": "https://github.com/huggingface/transformers/pull/8478", "diff_url": "https://github.com/huggingface/transformers/pull/8478.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8478.patch", "merged_at": 1606374183000 }
https://api.github.com/repos/huggingface/transformers/issues/8477
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8477/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8477/comments
https://api.github.com/repos/huggingface/transformers/issues/8477/events
https://github.com/huggingface/transformers/issues/8477
741,170,843
MDU6SXNzdWU3NDExNzA4NDM=
8,477
How to print out the probability for each beam search result in gpt2 text generator?
{ "login": "beibeic", "id": 12088194, "node_id": "MDQ6VXNlcjEyMDg4MTk0", "avatar_url": "https://avatars.githubusercontent.com/u/12088194?v=4", "gravatar_id": "", "url": "https://api.github.com/users/beibeic", "html_url": "https://github.com/beibeic", "followers_url": "https://api.github.com/users/beibeic/followers", "following_url": "https://api.github.com/users/beibeic/following{/other_user}", "gists_url": "https://api.github.com/users/beibeic/gists{/gist_id}", "starred_url": "https://api.github.com/users/beibeic/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/beibeic/subscriptions", "organizations_url": "https://api.github.com/users/beibeic/orgs", "repos_url": "https://api.github.com/users/beibeic/repos", "events_url": "https://api.github.com/users/beibeic/events{/privacy}", "received_events_url": "https://api.github.com/users/beibeic/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Take a look at https://github.com/huggingface/transformers/issues/5164", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,605
1,611
1,611
NONE
null
# 🚀 I want to see the probability for each text result generated by a GPT-2 model See the sample code below: when using a pre-trained model, can we also print out the probability associated with each beam_output? beam_outputs = model.generate( input_ids, max_length=50, num_beams=5, no_repeat_ngram_size=2, num_return_sequences=5, early_stopping=True ) for i, beam_output in enumerate(beam_outputs): print("{}: {}".format(i, tokenizer.decode(beam_output, skip_special_tokens=True)))
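A minimal sketch of one way to surface per-beam scores, assuming a transformers release recent enough that `generate()` accepts `return_dict_in_generate` and `output_scores` (newer than the versions discussed in this thread):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

input_ids = tokenizer.encode("The weather today is", return_tensors="pt")
outputs = model.generate(
    input_ids,
    max_length=20,
    num_beams=5,
    num_return_sequences=5,
    early_stopping=True,
    pad_token_id=tokenizer.eos_token_id,
    return_dict_in_generate=True,  # requires a recent transformers release
    output_scores=True,
)
# `sequences_scores` holds the final (length-penalized) log-probability of each beam.
for seq, score in zip(outputs.sequences, outputs.sequences_scores):
    print(torch.exp(score).item(), tokenizer.decode(seq, skip_special_tokens=True))
```

Exponentiating the score gives a rough (length-normalized) probability; the issue linked in the comments (#5164) walks through recomputing exact token-level probabilities from the logits.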
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8477/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8477/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8476
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8476/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8476/comments
https://api.github.com/repos/huggingface/transformers/issues/8476/events
https://github.com/huggingface/transformers/issues/8476
741,166,156
MDU6SXNzdWU3NDExNjYxNTY=
8,476
Trainer runs out of memory when computing eval score
{ "login": "soufianeelalami", "id": 16280778, "node_id": "MDQ6VXNlcjE2MjgwNzc4", "avatar_url": "https://avatars.githubusercontent.com/u/16280778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/soufianeelalami", "html_url": "https://github.com/soufianeelalami", "followers_url": "https://api.github.com/users/soufianeelalami/followers", "following_url": "https://api.github.com/users/soufianeelalami/following{/other_user}", "gists_url": "https://api.github.com/users/soufianeelalami/gists{/gist_id}", "starred_url": "https://api.github.com/users/soufianeelalami/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/soufianeelalami/subscriptions", "organizations_url": "https://api.github.com/users/soufianeelalami/orgs", "repos_url": "https://api.github.com/users/soufianeelalami/repos", "events_url": "https://api.github.com/users/soufianeelalami/events{/privacy}", "received_events_url": "https://api.github.com/users/soufianeelalami/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I'm not sure what the bug is: by requiring the complete predictions for your `compute_metrics` function, you are asking for an array of 4,057 by 200 by vocab_size (which for the base CamemBERT model is 30,522 I believe). This does not fit easily in RAM.\r\n", "Is there another way to compute the metrics (or an estimation) without having to build such a huge vector ?", "You haven't shared what metric you are using so I have no idea.", "This the function i'm using:\r\n\r\n```python\r\nfrom sklearn.metrics import precision_recall_fscore_support\r\ndef compute_metrics(p: EvalPrediction) -> Dict:\r\n #print('raw_predictions: ', p.predictions, '\\n')\r\n #print('labels: ', p.label_ids,'\\n')\r\n preds = np.argmax(p.predictions, axis=-1)\r\n #print('shape:', preds.shape, '\\n')\r\n precision, recall, f1, _ = precision_recall_fscore_support(p.label_ids.flatten(), preds.flatten(), average='weighted', zero_division=0)\r\n return {\r\n 'accuracy': (preds == p.label_ids).mean(),\r\n 'f1': f1,\r\n 'precision': precision,\r\n 'recall': recall\r\n }\r\n```", "I guess you could write your custom loop to store the predictions after the argmax together, this won't blow up memory the same way.", "Great, thanks a lot for the tip !\r\n\r\nI ll mark the issue as closed.", "@soufianeelalami Did you come up with a solution for this issue? Our team has run into the same issue with `nested_conat` while evaluating on a fairly large dataset.", "@gphillips-ema Hello, basically what you need to do is create your trainer class (which inherits from the trainer class) then override the ```prediction_loop```method to change one particular behavior:\r\n```python\r\nif logits is not None:\r\n #preds_host = logits if preds_host is None else nested_concat(preds_host, logits, padding_index=-100)\r\n logits_reduced = np.argmax(logits, axis=-1)\r\n preds_host = logits_reduced if preds_host is None else nested_concat(preds_host, logits_reduced, padding_index=-100)\r\n```\r\nYou need to do a ```np.argmax(logits, axis=-1)``` to reduce the dimension of the output logit vector. \r\n\r\nIf you are using accumulation, then you need to do the same thing in that part of the code (always in the ```prediction_loop```method).\r\n\r\nPlease let me know if this solves your problem or if you need any help.", "I was facing a related issues with `nested_concat` that caused GPU memory errors. Using the `Seq2SeqTrainer` instead of the default Trainer solved the issue for me, since does not rely on concatenating the logits over the vocabulary. ", "Same issue, I got an A5000 gpu for training, but I can't even eval with batch_size=8." ]
1,605
1,660
1,605
NONE
null
## Environment info - `transformers` version: 3.5.0 - Platform: Linux-3.10.0-1127.19.1.el7.x86_64-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.4.0 (False) - Tensorflow version (GPU?): 2.3.0 (False) - Using GPU in script?: NO - Using distributed or parallel set-up in script?: NO ### Who can help Trainer: @sgugger ## Information Model I am using (Bert, XLNet ...): Camembert The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce I am trying to fine-tune a CamemBERT model on an MLM task. This is the configuration I am using: ```python training_args = TrainingArguments( seed=92, output_dir='./results', # output directory disable_tqdm=False, prediction_loss_only=False, num_train_epochs=3, # total number of training epochs learning_rate=1e-4, evaluation_strategy='steps', per_device_train_batch_size=8, # batch size per device during training per_device_eval_batch_size=16, # batch size for evaluation eval_steps = 25, logging_dir='./logs', # directory for storing logs logging_steps=5, ) data_collator = DataCollatorForLanguageModeling(tokenizer=TOKENIZER, mlm=True, mlm_probability=0.15) trainer = Trainer( model=MODEL, args=training_args, data_collator=data_collator, train_dataset=train_dataset, eval_dataset=eval_dataset, compute_metrics = compute_metrics ) ``` Steps to reproduce the behavior: 1. Load a train and validation dataset. 2. Define a compute_metrics function for evaluation. 3. Evaluation works at the beginning, but it raises a ```RuntimeError: [enforce fail at CPUAllocator.cpp:64] . DefaultCPUAllocator: can't allocate memory: you tried to allocate 57680691200 bytes. Error code 12 (Cannot allocate memory)``` when trying to run the ```nested_concat``` function inside the ```prediction_loop```. 
``` /usr/local/lib/python3.6/dist-packages/transformers/trainer.py in prediction_loop(self, dataloader, description, prediction_loss_only) 1420 losses_host = losses if losses_host is None else torch.cat((losses_host, losses), dim=0) 1421 if logits is not None: -> 1422 preds_host = logits if preds_host is None else nested_concat(preds_host, logits, padding_index=-100) 1423 if labels is not None: 1424 labels_host = labels if labels_host is None else nested_concat(labels_host, labels, padding_index=-100) /usr/local/lib/python3.6/dist-packages/transformers/trainer_pt_utils.py in nested_concat(tensors, new_tensors, padding_index) 84 return type(tensors)(nested_concat(t, n, padding_index=padding_index) for t, n in zip(tensors, new_tensors)) 85 elif isinstance(tensors, torch.Tensor): ---> 86 return torch_pad_and_concatenate(tensors, new_tensors, padding_index=padding_index) 87 elif isinstance(tensors, np.ndarray): 88 return numpy_pad_and_concatenate(tensors, new_tensors, padding_index=padding_index) /usr/local/lib/python3.6/dist-packages/transformers/trainer_pt_utils.py in torch_pad_and_concatenate(tensor1, tensor2, padding_index) 52 53 # Now let's fill the result tensor ---> 54 result = tensor1.new_full(new_shape, padding_index) 55 result[: tensor1.shape[0], : tensor1.shape[1]] = tensor1 56 result[tensor1.shape[0] :, : tensor2.shape[1]] = tensor2 RuntimeError: [enforce fail at CPUAllocator.cpp:64] . DefaultCPUAllocator: can't allocate memory: you tried to allocate 57680691200 bytes. Error code 12 (Cannot allocate memory) ``` The machine I am using has 120 GB of RAM. The data contains 20355 sentences, with the maximum number of words in a sentence below 200. The dataset fits easily in RAM. The subset used for evaluation contains 4057 examples with the same structure as the training dataset. ## Expected behavior It seems that setting ```prediction_loss_only=True``` avoids the problem, as it computes only the loss metric and skips the evaluation metrics, which requires far less RAM. The downside, obviously, is that you don't get any evaluation metrics. The Trainer should be able to handle the workload as we go further into the evaluation steps. Maybe clearing heavy variables in the evaluation process would help avoid blowing up RAM with stored values that are too large.
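For reference, a sketch of the workaround discussed in the comments above, written as a standalone loop rather than a `Trainer` override; the function and variable names are illustrative, not part of the library. Each batch's logits are reduced to argmax predictions before being accumulated, so the vocab-sized tensor is never stored:

```python
import torch
from torch.utils.data import DataLoader

@torch.no_grad()
def evaluate_with_argmax(model, eval_dataset, data_collator, device, batch_size=16):
    # Accumulate argmax predictions instead of full logits to keep RAM usage flat.
    loader = DataLoader(eval_dataset, batch_size=batch_size, collate_fn=data_collator)
    all_preds, all_labels = [], []
    model.eval()
    for batch in loader:
        labels = batch.pop("labels")
        batch = {k: v.to(device) for k, v in batch.items()}
        logits = model(**batch)[0]  # (batch, seq_len, vocab_size)
        all_preds.append(logits.argmax(dim=-1).flatten().cpu())  # vocab dim reduced here
        all_labels.append(labels.flatten())
    return torch.cat(all_preds), torch.cat(all_labels)
```

The flattened predictions and labels can then go straight into `precision_recall_fscore_support`.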
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8476/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8476/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8475
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8475/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8475/comments
https://api.github.com/repos/huggingface/transformers/issues/8475/events
https://github.com/huggingface/transformers/pull/8475
741,134,454
MDExOlB1bGxSZXF1ZXN0NTE5NTAyNTg3
8,475
Update deploy-docs dependencies on CI to enable Flax
{ "login": "mfuntowicz", "id": 2241520, "node_id": "MDQ6VXNlcjIyNDE1MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mfuntowicz", "html_url": "https://github.com/mfuntowicz", "followers_url": "https://api.github.com/users/mfuntowicz/followers", "following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}", "gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}", "starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions", "organizations_url": "https://api.github.com/users/mfuntowicz/orgs", "repos_url": "https://api.github.com/users/mfuntowicz/repos", "events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}", "received_events_url": "https://api.github.com/users/mfuntowicz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,605
1,605
1,605
MEMBER
null
Signed-off-by: Morgan Funtowicz <[email protected]>
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8475/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8475/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8475", "html_url": "https://github.com/huggingface/transformers/pull/8475", "diff_url": "https://github.com/huggingface/transformers/pull/8475.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8475.patch", "merged_at": 1605137502000 }
https://api.github.com/repos/huggingface/transformers/issues/8474
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8474/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8474/comments
https://api.github.com/repos/huggingface/transformers/issues/8474/events
https://github.com/huggingface/transformers/pull/8474
741,102,627
MDExOlB1bGxSZXF1ZXN0NTE5NDc2ODUw
8,474
Fix on "examples/language-modeling" to support more datasets
{ "login": "zeyuyun1", "id": 43428393, "node_id": "MDQ6VXNlcjQzNDI4Mzkz", "avatar_url": "https://avatars.githubusercontent.com/u/43428393?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zeyuyun1", "html_url": "https://github.com/zeyuyun1", "followers_url": "https://api.github.com/users/zeyuyun1/followers", "following_url": "https://api.github.com/users/zeyuyun1/following{/other_user}", "gists_url": "https://api.github.com/users/zeyuyun1/gists{/gist_id}", "starred_url": "https://api.github.com/users/zeyuyun1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zeyuyun1/subscriptions", "organizations_url": "https://api.github.com/users/zeyuyun1/orgs", "repos_url": "https://api.github.com/users/zeyuyun1/repos", "events_url": "https://api.github.com/users/zeyuyun1/events{/privacy}", "received_events_url": "https://api.github.com/users/zeyuyun1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Great fix, thanks!" ]
1,605
1,605
1,605
CONTRIBUTOR
null
# What does this PR do? Fixes "run_clm.py", "run_mlm.py", and "run_plm.py" so that they can support datasets with more than one feature. Previously, they would fail on datasets with more than one feature. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). ## Who can review? @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8474/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8474/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8474", "html_url": "https://github.com/huggingface/transformers/pull/8474", "diff_url": "https://github.com/huggingface/transformers/pull/8474.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8474.patch", "merged_at": 1605192429000 }
https://api.github.com/repos/huggingface/transformers/issues/8473
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8473/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8473/comments
https://api.github.com/repos/huggingface/transformers/issues/8473/events
https://github.com/huggingface/transformers/issues/8473
741,070,726
MDU6SXNzdWU3NDEwNzA3MjY=
8,473
Support fp16 for inference
{ "login": "urimerhav", "id": 9450187, "node_id": "MDQ6VXNlcjk0NTAxODc=", "avatar_url": "https://avatars.githubusercontent.com/u/9450187?v=4", "gravatar_id": "", "url": "https://api.github.com/users/urimerhav", "html_url": "https://github.com/urimerhav", "followers_url": "https://api.github.com/users/urimerhav/followers", "following_url": "https://api.github.com/users/urimerhav/following{/other_user}", "gists_url": "https://api.github.com/users/urimerhav/gists{/gist_id}", "starred_url": "https://api.github.com/users/urimerhav/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/urimerhav/subscriptions", "organizations_url": "https://api.github.com/users/urimerhav/orgs", "repos_url": "https://api.github.com/users/urimerhav/repos", "events_url": "https://api.github.com/users/urimerhav/events{/privacy}", "received_events_url": "https://api.github.com/users/urimerhav/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "I imagine this only happens when the `generate()` method is used under the hood? ", "Hi there! It's true you can't just do `model.half()` for generation. There is nothing in Trainer/Seq2SeqTrainer right now for FP16-inference, only training, but we're looking at it right now through #8403. So stay tuned!", "Thanks for the input @sgugger. Good to know we're not missing something here and it's actually unsupported somehow. ", "Hi,\r\nI've noticed the same issue of the model randomly generating junk when using autocast within a custom generate() method with the only change below (fp16 is a boolean). From the above comments I thought this approach should've worked.\r\n```\r\nif fp16:\r\n\twith torch.cuda.amp.autocast():\r\n\t\toutputs = self(**model_inputs)\r\nelse:\r\n\toutputs = self(**model_inputs)\r\n```\r\n\r\nThe current model I've tested it on is a huggingface gpt2 model finetuned on a personal dataset. Without fp16 the generate works perfectly. The dataset is very specific and the model is supposed to generate symbols+numbers, so it's clear when it starts spitting out words during fp16 inference.\r\n", "This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread.", "has this been solved?" ]
1,605
1,633
1,614
NONE
null
# 🚀 Feature request - support fp16 inference Right now most models support mixed precision for model training, but not for inference. Naively calling `model = model.half()` makes the model generate junk instead of valid results for text generation, even though mixed precision works fine in training. If there's a way to make the model produce stable behavior at 16-bit precision at inference, the throughput can potentially double on most modern GPUs. ## Motivation Doubling the speed is always attractive, especially since transformers are compute-intensive.
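A minimal sketch of the usual first attempt, wrapping generation in `torch.cuda.amp.autocast` instead of calling `model.half()`; as the comments above note, even this is not guaranteed to produce stable generations on these library versions, so treat it as an experiment rather than a supported path:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").to(device).eval()

input_ids = tokenizer.encode("Transformers are", return_tensors="pt").to(device)
# autocast only has an effect on CUDA; on CPU this runs in full precision.
with torch.no_grad(), torch.cuda.amp.autocast(enabled=(device == "cuda")):
    output = model.generate(input_ids, max_length=30, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```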
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8473/reactions", "total_count": 7, "+1": 7, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8473/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8472
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8472/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8472/comments
https://api.github.com/repos/huggingface/transformers/issues/8472/events
https://github.com/huggingface/transformers/issues/8472
741,068,616
MDU6SXNzdWU3NDEwNjg2MTY=
8,472
GPT2 (pre-trained not fine-tuned) only generates additional special tokens
{ "login": "al3xpapangelis", "id": 68122121, "node_id": "MDQ6VXNlcjY4MTIyMTIx", "avatar_url": "https://avatars.githubusercontent.com/u/68122121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/al3xpapangelis", "html_url": "https://github.com/al3xpapangelis", "followers_url": "https://api.github.com/users/al3xpapangelis/followers", "following_url": "https://api.github.com/users/al3xpapangelis/following{/other_user}", "gists_url": "https://api.github.com/users/al3xpapangelis/gists{/gist_id}", "starred_url": "https://api.github.com/users/al3xpapangelis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/al3xpapangelis/subscriptions", "organizations_url": "https://api.github.com/users/al3xpapangelis/orgs", "repos_url": "https://api.github.com/users/al3xpapangelis/repos", "events_url": "https://api.github.com/users/al3xpapangelis/events{/privacy}", "received_events_url": "https://api.github.com/users/al3xpapangelis/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@g-karthik ", "Was able to reproduce this as well.\r\n\r\nFrom the observation, I suspect the random weights being initialized for the added tokens in the final `Linear` and/or `SequenceSummary` head are such that no matter what hidden state is sent in, a special token gets the highest final scalar score. Haven't dived in to check how the random initialization is done, but if it's done from a standard unit Gaussian, I would imagine this cannot happen at every single time-step.", "You should fine-tune your model on a dataset containing your added tokens, otherwise the model will very probably generate gibberish.", "@LysandreJik The model is only generating special tokens. It does not generate any of the original tokens in the pre-trained model's vocabulary. I'd understand if there were special tokens generated occasionally, but that's not the case.", "I understand. By adding new tokens, you're resizing the token embedding layer with some *randomly initialized values*. These values can be of an entirely different dimension to the ones currently initialized in your token embedding layer, which can lead to these tokens being overly generated.\r\n\r\nAs I said before: you should fine-tune your model on a dataset containing your added tokens, otherwise the model will very probably generate gibberish.", "I'm aware that fine-tuning on a dataset containing the added tokens will bring those \"entirely different dimension\" (as you call them) values back to the \"same dimension\". But that's besides the point here. We're talking about expected behavior.\r\n\r\n> By adding new tokens, you're resizing the token embedding layer with some randomly initialized values\r\n\r\nIt's not just the token embedding layer that'll get resized though, right? There's the final `Linear` that outputs the distribution over the vocabulary as well that would have to be adjusted to account for the new vocabulary size.\r\n\r\n> These values can be of an entirely different dimension to the ones currently initialized in your token embedding layer\r\n\r\nCan you please point me to the code that does this \"entirely different dimension\" random initialization for the added tokens? If the argument you're making is that a pre-trained model *should* generate *only-special-tokens* gibberish if special tokens were added to its vocabulary and it were resized accordingly, then I disagree with it. I would expect a mix of both, and if the random initialization can be altered to ensure the pre-trained model's behavior with and without added special tokens is *mostly* similar, that would be the best outcome for consistency. That's why I brought up the point about random initialization from a unit Gaussian earlier in this issue.", "> It's not just the token embedding layer that'll get resized though, right? There's the final Linear that outputs the distribution over the vocabulary as well that would have to be adjusted to account for the new vocabulary size.\r\n\r\nYes, you are right. However, the embedding layer and output linear layer are shared. They have the same values, as they are tied. 
Resizing the embedding layer resizes the output layer at the same time.\r\nThis is done in the [_tie_or_clone_weights method](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_utils.py#L571), which is called by the [init_weights method](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_gpt2.py#L705).\r\n\r\n> Can you please point me to the code that does this \"entirely different dimension\" random initialization for the added tokens?\r\n\r\n[Here it is](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_gpt2.py#L341-L351).\r\n\r\n> If the argument you're making is that a pre-trained model should generate only-special-tokens gibberish if special tokens were added to its vocabulary and it were resized accordingly, then I disagree with it.\r\n\r\nI am not making this argument. The argument I'm making is that if you're adding *any* token to an embedding matrix **without** training the model afterwards, you will obtain gibberish.", "> The argument I'm making is that if you're adding any token to an embedding matrix without training the model afterwards, you will obtain gibberish.\r\n\r\nWhat @al3xpapangelis and I are saying is that a pre-trained model's behavior *before* and *after* adding *any* new token, with no *further* training done beyond the original pre-training, should be the same *on average* for the same input. We're talking about expected behavior vs. actual behavior, and what can be done to make them the same. Your argument makes sense to me, but only in the scenario where the model's not been trained originally at all.\r\n\r\nThanks for the pointers! Looks like the random initialization for `Linear` is from a 0-centered, 0.02 std. deviation Gaussian. I'll do some analysis to see how vectors from this distribution \"vary\" on average from a pre-trained embedding for a regular token.", "Hey @g-karthik and hey @al3xpapangelis (cool to see you here again :-) ), \r\n\r\nThe reason for this behavior is mainly because `lm_head` is tied to the word_embedding matrix and therefore the softmax over the output logit vectors seems to give very high values to the randomly init tokens. 
So this seems to suggest the distribution of trained logit vectors is very different from the randomly-initialized ones.\r\n\r\nI'd also suggest playing around with changing the init scheme for new tokens or just setting the newly added tokens manually to some better value:\r\n\r\n```python\r\nmodel.lm_head.weight[-2, :] = # good init vector for <USER>\r\nmodel.lm_head.weight[-1, :] = # good init vector for <SYSTEM>\r\n```\r\n\r\nIf you do this for example, the tokens won't be generated.\r\n\r\n```python\r\nimport torch\r\nimport torch.nn.functional as F\r\nfrom transformers import GPT2Tokenizer, GPT2LMHeadModel\r\n\r\ntokenizer = GPT2Tokenizer.from_pretrained('distilgpt2')\r\ntokenizer.add_special_tokens(\r\n\t{'additional_special_tokens': ['<USER>', '<SYSTEM>']}\r\n)\r\n\r\nmodel = GPT2LMHeadModel.from_pretrained('distilgpt2')\r\nmodel.resize_token_embeddings(len(tokenizer))\r\ninp_tok_ids = tokenizer.encode('I want a pepperoni pizza with mushroom')\r\ninp_tensor = torch.LongTensor(inp_tok_ids).unsqueeze(0)\r\nmodel.eval()\r\n\r\nmodel.lm_head.weight[-2, :] = (torch.zeros((768,)) - 10000.0) \r\nmodel.lm_head.weight[-1, :] = (torch.zeros((768,)) - 10000.0) \r\n\r\nwith torch.no_grad():\r\n\tfor i in range(10):\r\n\t\toutputs = model(inp_tensor)\r\n\t\tlogits = outputs[0][:, -1, :]\r\n\t\tprobs = F.softmax(logits, dim=-1)\r\n\t\tnext_token = torch.multinomial(probs, num_samples=1).squeeze(1)\r\n\t\tinp_tensor = torch.cat([inp_tensor, next_token.unsqueeze(-1)], dim=-1)\r\n\r\nprint(tokenizer.decode(inp_tensor[0]))\r\n```", "@patrickvonplaten yes, I was thinking I'll try and estimate the mean and covariance of the set of values in GPT-2's pre-trained embeddings (across all of its 4 model sizes), assuming a Gaussian distribution. And then update the random initialization's mean and std. dev. accordingly in the model's [`_init_weights()`](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_gpt2.py#L346). That way, the random initialization comes from a distribution that's effectively \"similar\" to that of the pre-trained vectors, and hence decoding sequences would result in a mixture of the original tokens and added tokens.", "Thanks @LysandreJik and @patrickvonplaten! 
I like @g-karthik's suggestion; it would be nice for this behaviour to happen automatically", "I found the same problem while I was trying to fine-tune GPT-2, and here is how I solved it: \r\n\r\n```\r\n# special tokens are defined\r\nadd_spe_tokens = ['<|ANS|>']\r\n\r\nspecial_tokens_dict = {'eos_token': '<|EOS|>', \r\n 'bos_token': '<|SOS|>', \r\n 'pad_token': '<|PAD|>', \r\n 'additional_special_tokens':add_spe_tokens}\r\n\r\n# the new token is added to the tokenizer\r\ntokenizer = GPT2Tokenizer.from_pretrained(\"gpt2\")\r\nnum_added_toks = tokenizer.add_special_tokens(special_tokens_dict)\r\nprint(f\"bos_token_id: {tokenizer.bos_token_id}\")\r\nprint(f\"eos_token_id: {tokenizer.eos_token_id}\")\r\nprint(f\"pad_token_id: {tokenizer.pad_token_id}\")\r\n\r\n\r\n# model configuration to which we add the special tokens\r\nconfig = AutoConfig.from_pretrained('gpt2', \r\n bos_token_id=tokenizer.bos_token_id,\r\n eos_token_id=tokenizer.eos_token_id,\r\n pad_token_id=tokenizer.pad_token_id,\r\n output_hidden_states=False)\r\n\r\n# we load the pre-trained model with custom settings\r\nmodel = GPT2LMHeadModel.from_pretrained('gpt2', config=config)\r\n\r\n# model embedding resizing\r\nprint(\"before : \", model.lm_head.weight.shape)\r\nmodel.resize_token_embeddings(len(tokenizer))\r\n\r\nnew_weights = torch.cat([model.lm_head.weight[:-4, :], torch.zeros(4, model.lm_head.weight.shape[1]) -10000])\r\nmodel.lm_head.weight = torch.nn.Parameter(new_weights) \r\n\r\n\r\nprint(\"after : \", model.lm_head.weight.shape)\r\n```\r\n\r\n```\r\ninput_text = \"Once upon a time,\"\r\n\r\ninput_ids = tokenizer.encode(input_text, return_tensors=\"pt\")\r\nprint(input_ids)\r\noutput = model.generate(input_ids, max_length=50, num_return_sequences=1)\r\n\r\nprint(output)\r\n# Decode and print the generated text\r\n# generated_text = tokenizer.decode(output[0], skip_special_tokens=True)\r\n# print(generated_text)\r\n```", "But I am not sure whether this will ruin the pre-trained weights during fine-tuning or not. " ]
1,605
1,698
1,605
NONE
null
## Environment info - `transformers` version: 3.5.0 - Platform: Darwin-19.6.0-x86_64-i386-64bit - Python version: 3.6.3 - PyTorch version (GPU?): 1.7.0 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @patrickvonplaten ## Information Model I am using (GPT2 / DistilGPT2): The problem arises when using: * [ ] the official example scripts: (give details below) * [X] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [X] my own task or dataset: (give details below) I'm using GPT2 or DistilGPT2 on MetalWOZ and the issue I'm having is when I add special tokens (even bos, eos, etc) and prompt the model, it only generates those special tokens - no other token. For example, if I add the tokens <USER> and <SYSTEM> and prompt the model with: "I want a pepperoni pizza with mushroom" I get: "I want a pepperoni pizza with mushroom <USER> <USER> <USER> <SYSTEM> <USER> <USER> <USER> <SYSTEM> <USER> <USER>" ## To reproduce Steps to reproduce the behavior: 1. Add special tokens to a GPT2 model (example below with distilgpt2 but I get the same behavior with gpt2) 2. Resize embeddings 3. Prompt model ``` import torch import torch.nn.functional as F from transformers import GPT2Tokenizer, GPT2LMHeadModel tokenizer = GPT2Tokenizer.from_pretrained('distilgpt2') tokenizer.add_special_tokens( {'additional_special_tokens': ['<USER>', '<SYSTEM>']} ) model = GPT2LMHeadModel.from_pretrained('distilgpt2') model.resize_token_embeddings(len(tokenizer)) inp_tok_ids = tokenizer.encode('I want a pepperoni pizza with mushroom') inp_tensor = torch.LongTensor(inp_tok_ids).unsqueeze(0) model.eval() with torch.no_grad(): for i in range(10): outputs = model(inp_tensor) logits = outputs[0][:, -1, :] probs = F.softmax(logits, dim=-1) next_token = torch.multinomial(probs, num_samples=1).squeeze(1) inp_tensor = torch.cat([inp_tensor, next_token.unsqueeze(-1)], dim=-1) print(tokenizer.decode(inp_tensor[0])) ``` ## Expected behavior I would expect a mix of the new special tokens and other tokens.
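One mitigation worth trying before any fine-tuning (a common trick, not an official fix): initialize the new embedding rows from the mean of the pre-trained rows so their scale matches the trained distribution. Since GPT-2 ties `lm_head` to the input embeddings, this also adjusts the output layer:

```python
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("distilgpt2")
num_added = tokenizer.add_special_tokens(
    {"additional_special_tokens": ["<USER>", "<SYSTEM>"]}
)

model = GPT2LMHeadModel.from_pretrained("distilgpt2")
model.resize_token_embeddings(len(tokenizer))

with torch.no_grad():
    emb = model.get_input_embeddings().weight  # tied to lm_head in GPT-2
    mean_vec = emb[:-num_added].mean(dim=0)
    emb[-num_added:] = mean_vec  # new rows now match the pre-trained scale
```

With this initialization the new tokens should no longer be pathologically over-scored, so generation produces a mix of regular and special tokens.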
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8472/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8472/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8471
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8471/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8471/comments
https://api.github.com/repos/huggingface/transformers/issues/8471/events
https://github.com/huggingface/transformers/issues/8471
740,936,088
MDU6SXNzdWU3NDA5MzYwODg=
8,471
TFBertForTokenClassification predicting only O labels on a NER task
{ "login": "Zast996", "id": 62074263, "node_id": "MDQ6VXNlcjYyMDc0MjYz", "avatar_url": "https://avatars.githubusercontent.com/u/62074263?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Zast996", "html_url": "https://github.com/Zast996", "followers_url": "https://api.github.com/users/Zast996/followers", "following_url": "https://api.github.com/users/Zast996/following{/other_user}", "gists_url": "https://api.github.com/users/Zast996/gists{/gist_id}", "starred_url": "https://api.github.com/users/Zast996/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Zast996/subscriptions", "organizations_url": "https://api.github.com/users/Zast996/orgs", "repos_url": "https://api.github.com/users/Zast996/repos", "events_url": "https://api.github.com/users/Zast996/events{/privacy}", "received_events_url": "https://api.github.com/users/Zast996/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\nThanks!" ]
1,605
1,605
1,605
NONE
null
I'm using TFBertForTokenClassification to perform a NER task on the annotated corpus for NER: [https://www.kaggle.com/abhinavwalia95/entity-annotated-corpus](https://www.kaggle.com/abhinavwalia95/entity-annotated-corpus). The problem is that the O labels are the majority of all labels, so the accuracy is quite high, as the model correctly predicts most of them. When I try to predict the labels of a simple sentence, the network predicts only the O label for each token of the sentence; however, in several tutorials that use PyTorch (I am using TensorFlow), the predictions are good. Probably there is a problem in my code, but I cannot figure out where it is. The code is the following: ```python # Import libraries import tensorflow as tf import pandas as pd from sklearn.model_selection import train_test_split import math import numpy as np from transformers import ( TF2_WEIGHTS_NAME, BertConfig, BertTokenizer, TFBertForTokenClassification, create_optimizer) ``` ```python # Config MAX_LEN= 128 TRAIN_BATCH_SIZE = 32 VALID_BATCH_SIZE = 8 EPOCHS = 10 BERT_MODEL = 'bert-base-uncased' MODEL_PATH = "model.bin" TRAINING_FILE = "../input/entity-annotated-corpus/ner_dataset.csv" TOKENIZER = BertTokenizer.from_pretrained(BERT_MODEL, do_lower_case=True) ``` ```python # Create the padded input, attention masks, token type and labels def get_train_data(text, tags): tokenized_text = [] target_tags = [] for index, token in enumerate(text): encoded_token = TOKENIZER.encode( token, add_special_tokens = False ) encoded_token_len = len(encoded_token) tokenized_text.extend(encoded_token) target_tags.extend([tags[index]] * encoded_token_len) #truncation tokenized_text = tokenized_text[: MAX_LEN - 2] target_tags = target_tags[: MAX_LEN - 2] #[101] = [CLS] , [102] = [SEP] tokenized_text = [101] + tokenized_text + [102] target_tags = [0] + target_tags + [0] attention_mask = [1] * len(tokenized_text) token_type_ids = [0] * len(tokenized_text) #padding padding_len = int(MAX_LEN - len(tokenized_text)) tokenized_text = tokenized_text + ([0] * padding_len) target_tags = target_tags + ([0] * padding_len) attention_mask = attention_mask + ([0] * padding_len) token_type_ids = token_type_ids + ([0] * padding_len) return (tokenized_text, target_tags, attention_mask, token_type_ids) ``` ```python # Extract sentences from dataset class RetrieveSentence(object): def __init__(self, data): self.n_sent = 1 self.data = data self.empty = False function = lambda s: [(w, p, t) for w, p, t in zip(s["Word"].values.tolist(), s["POS"].values.tolist(), s["Tag"].values.tolist())] self.grouped = self.data.groupby("Sentence #").apply(function) self.sentences = [s for s in self.grouped] def retrieve(self): try: s = self.grouped["Sentence: {}".format(self.n_sent)] self.n_sent += 1 return s except: return None ``` ```python # Load dataset and create one hot encoding for labels df_data = pd.read_csv(TRAINING_FILE,sep=",",encoding="latin1").fillna(method='ffill') Sentences = RetrieveSentence(df_data) sentences_list = [" ".join([s[0] for s in sent]) for sent in Sentences.sentences] labels = [ [s[2] for s in sent] for sent in Sentences.sentences] tags_2_val = list(set(df_data["Tag"])) tag_2_idx = {t: i for i, t in enumerate(tags_2_val)} id_labels = [[tag_2_idx.get(l) for l in lab] for lab in labels] sentences_list = [sent.split() for sent in sentences_list] # I removed sentence no. 41770 because it gave index problems del labels[41770] del sentences_list[41770] del id_labels[41770] ``` ```python encoded_text = [] encoded_labels = [] 
attention_masks = [] token_type_ids = [] for i in range(len(sentences_list)): text, labels, att_mask, tok_type = get_train_data(text = sentences_list[i], tags = id_labels[i]) encoded_text.append(text) encoded_labels.append(labels) attention_masks.append(att_mask) token_type_ids.append(tok_type) ``` ```python # Convert from list to np array encoded_text = np.array(encoded_text) encoded_labels = np.array(encoded_labels) attention_masks = np.array(attention_masks) token_type_ids = np.array(token_type_ids) ``` ```python # Train Test split X_train, X_valid, Y_train, Y_valid = train_test_split(encoded_text, encoded_labels, random_state=20, test_size=0.1) Mask_train, Mask_valid, Token_ids_train, Token_ids_valid = train_test_split(attention_masks,token_type_ids ,random_state=20, test_size=0.1) ``` ```python # Aggregate the train and test set, then shuffle and batch the train set def example_to_features(input_ids,attention_masks,token_type_ids,y): return {"input_ids": input_ids, "attention_mask": attention_masks, "token_type_ids": token_type_ids},y train_ds = tf.data.Dataset.from_tensor_slices((X_train,Mask_train,Token_ids_train,Y_train)).map(example_to_features).shuffle(1000).batch(32) test_ds=tf.data.Dataset.from_tensor_slices((X_valid,Mask_valid,Token_ids_valid,Y_valid)).map(example_to_features).batch(1) ``` ```python # Load TFBertForTokenClassification with default config config = BertConfig.from_pretrained(BERT_MODEL,num_labels=len(tags_2_val)) model = TFBertForTokenClassification.from_pretrained(BERT_MODEL, from_pt=bool(".bin" in BERT_MODEL), config=config) ``` ```python # Add softmax layer, compute loss, optimizer and fit model.layers[-1].activation = tf.keras.activations.softmax model.summary() optimizer = tf.keras.optimizers.Adam() loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) metric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy') model.compile(optimizer=optimizer, loss=loss, metrics=[metric]) history = model.fit(train_ds, epochs=3, validation_data=test_ds) ``` ```python # Prediction. Spoiler: the predicted labels are all O labels sentence = "Hi , my name is Bob and I live in England" inputs = TOKENIZER(sentence, return_tensors="tf") input_ids = inputs["input_ids"] inputs["labels"] = tf.reshape(tf.constant([1] * tf.size(input_ids).numpy()), (-1, tf.size(input_ids))) # Batch size 1 output = model(inputs) ``` The code is executed in a Kaggle notebook. The transformers library version is 3.4.0. I also include an ipynb file which shows the output. [BERT_NERT_Tensorflow.zip](https://github.com/huggingface/transformers/files/5525652/BERT_NERT_Tensorflow.zip) Many thanks in advance.
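One likely culprit, beyond the O-label imbalance itself, is that every padding position is also trained toward tag id 0, further rewarding the majority prediction. A hedged sketch of a mitigation, masking padding out of the loss with per-token sample weights; it mirrors `example_to_features` above and reuses the arrays defined there, and the weighting scheme is an assumption, not a guaranteed fix:

```python
import tensorflow as tf

# Returning a third element from the dataset lets Keras weight each token's
# loss; padding positions (attention mask 0) then contribute nothing.
def example_to_weighted_features(input_ids, attention_masks, token_type_ids, y):
    features = {"input_ids": input_ids,
                "attention_mask": attention_masks,
                "token_type_ids": token_type_ids}
    sample_weight = tf.cast(attention_masks, tf.float32)
    return features, y, sample_weight

train_ds = tf.data.Dataset.from_tensor_slices(
    (X_train, Mask_train, Token_ids_train, Y_train)
).map(example_to_weighted_features).shuffle(1000).batch(32)
```

Up-weighting entity tags relative to the O tag in `sample_weight` is a further option for the class imbalance itself.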
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8471/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8471/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8470
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8470/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8470/comments
https://api.github.com/repos/huggingface/transformers/issues/8470/events
https://github.com/huggingface/transformers/pull/8470
740,902,522
MDExOlB1bGxSZXF1ZXN0NTE5MzEyMjM0
8,470
Add pretraining loss computation for TF Bert pretraining
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,605
1,605
1,605
CONTRIBUTOR
null
# What does this PR do? This PR adds the loss computation for the TF BERT pretraining model. The loss computation test is also made more robust to variable call-signature lengths.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8470/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8470/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8470", "html_url": "https://github.com/huggingface/transformers/pull/8470", "diff_url": "https://github.com/huggingface/transformers/pull/8470.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8470.patch", "merged_at": 1605208107000 }
https://api.github.com/repos/huggingface/transformers/issues/8469
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8469/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8469/comments
https://api.github.com/repos/huggingface/transformers/issues/8469/events
https://github.com/huggingface/transformers/issues/8469
740,896,538
MDU6SXNzdWU3NDA4OTY1Mzg=
8,469
Pegasus models load very slowly or do not load at all on initial execution of from_pretrained() when Python is spawned from within a Node.js process
{ "login": "wehriam", "id": 81482, "node_id": "MDQ6VXNlcjgxNDgy", "avatar_url": "https://avatars.githubusercontent.com/u/81482?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wehriam", "html_url": "https://github.com/wehriam", "followers_url": "https://api.github.com/users/wehriam/followers", "following_url": "https://api.github.com/users/wehriam/following{/other_user}", "gists_url": "https://api.github.com/users/wehriam/gists{/gist_id}", "starred_url": "https://api.github.com/users/wehriam/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wehriam/subscriptions", "organizations_url": "https://api.github.com/users/wehriam/orgs", "repos_url": "https://api.github.com/users/wehriam/repos", "events_url": "https://api.github.com/users/wehriam/events{/privacy}", "received_events_url": "https://api.github.com/users/wehriam/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "maybe @julien-c or @Pierrci can answer this question better", "This seems very environment-specific. @wehriam in case paid support is an option, we have a technical support program in beta at [email protected] (cc @clmnt @jeffboudier)", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,605
1,611
1,611
NONE
null
## Description We are using Node.js to coordinate processes, including a Python web server that loads a pretrained model in CPU mode. The model files are downloaded and cached prior to execution during a container build process. On initial execution, it appears the model stalls while loading the primary weights file from the local cache. Subsequent executions do not stall at this point and load successfully. Are there any threading caveats we should be aware of when using a coordinating process? Issue #7516 noticed problems with Celery outside of single-pool mode. ## Environment info - `transformers` version: 3.4.0 - Platform: Docker, RHEL8 UBI Base Image - Python version: Python 3.8.0 (default, Mar 9 2020, 18:02:46), [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)] on linux - PyTorch version (GPU?): 1.7.0+cpu, no GPU - Tensorflow version (GPU?): N/A - Using GPU in script?: No - Using distributed or parallel set-up in script?: No - System: AWS t3.large image running Docker ### Who can help Pegasus: @patrickvonplaten documentation: @sgugger ### Thank you! Thank you to the Hugging Face team and contributors to this project. The barriers to entry on complex NLP tasks have been lowered substantially, and it's been a huge amount of fun exploring with these models. ## Information Model I am using: tuner007/pegasus_paraphrase The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Spawn a Python webserver that loads a Pegasus model using `from_pretrained(...)` 2. The initial load takes upwards of 300 seconds or fails 3. Subsequent loads take 60 seconds but succeed ## Expected behavior Load time of Pegasus models should be consistent. Docker runs in privileged mode with full access to host memory and network. ## Source Code and Logs ```bash sudo docker run -d \ -u 0:0 \ --ipc=host \ --privileged \ --network host \ --restart always \ --log-driver json-file \ --log-opt max-size=100m \ --log-opt max-file=5 \ -v /root/.example:/root/.example \ --name example \ example:1.0 ``` Node.js management script spawns a detached Python process. 
```js // @flow const uuid = require('uuid'); const { spawn } = require('child_process'); const superagent = require('superagent'); const makeLogger = require('../logger'); const killProcess = require('../lib/kill-process'); const commandExists = require('command-exists'); const pythonWebserverExistsPromise = new Promise((resolve, reject) => { commandExists('python3.8', (error:Error, exists:boolean) => { if (error) { reject(error); } else { resolve(exists); } }); }); class PythonWebserverDoesNotExistError extends Error {} module.exports.PythonWebserverDoesNotExistError = PythonWebserverDoesNotExistError; module.exports.startPythonWebserver = async (port:number, path:string) => { const logger = makeLogger(`Python Webserver ${path}`); const exists = await pythonWebserverExistsPromise; if (!exists) { logger.error('python3.8 does not exist on path'); throw new PythonWebserverDoesNotExistError('python3.8 does not exist on path'); } const pythonWebserverArgs = [path, `${port}`]; let isManuallyClosed = false; let mainProcess; let pid; let isReadyPromise = Promise.resolve(); const spawnPythonWebserver = () => { logger.info(`Spawning ${path}`); mainProcess = spawn('python3.8', pythonWebserverArgs, { windowsHide: true, detached: true, shell: true, env: Object.assign({}, {}, process.env) }); pid = mainProcess.pid; if (!pid) { logger.error('Process did not spawn with PID'); try { mainProcess.kill(); } catch (error) { logger.error('Unable to kill process'); logger.errorStack(error); } throw new Error('Python webserver process did not spawn'); } let isClosed = false; isManuallyClosed = false; mainProcess.stdout.on('data', (data) => { data.toString('utf8').trim().split('\n').forEach((line) => logger.info(line)); }); mainProcess.stderr.on('data', (data) => { data.toString('utf8').trim().split('\n').forEach((line) => logger.error(line)); }); mainProcess.on('error', (error) => { logger.errorStack(error); }); mainProcess.on('close', (code:number) => { if (code === 0 || code === null) { logger.info('Process closed'); } else { logger.error(`Failed with exit code ${code}`); } isClosed = true; if (!isManuallyClosed) { spawnPythonWebserver(); } }); logger.info('Listening'); isReadyPromise = (async () => { let lastError; const connectionTimeout = Date.now() + 300000; const start = Date.now(); while (true) { if (isManuallyClosed && isClosed) { return; } if (isClosed || Date.now() > connectionTimeout) { await close(); logger.error(`Unable to conect to ${path} at http://127.0.0.1:${port} after 300 seconds`); if (lastError) { logger.errorStack(lastError); throw new Error(`Unable to conect to python webserver ${path} at http://127.0.0.1:${port} after 300 seconds: ${lastError.message}`); } else { throw new Error(`Unable to conect to python webserver ${path} at http://127.0.0.1:${port} after 300 seconds`); } } try { await superagent.get(`http://127.0.0.1:${port}/${uuid.v4()}`).timeout({ response: 3000, deadline: 3000 }); break; } catch (error) { if (error && error.response && error.response.statusCode === 404) { break; } else { lastError = error; } } await new Promise((resolve) => setTimeout(resolve, 1000)); } logger.info(`Connected to ${path} at http://127.0.0.1:${port} after ${Math.round(10 * (Date.now() - start) / 1000) / 10} seconds`); })(); }; spawnPythonWebserver(); const close = async () => { isManuallyClosed = true; logger.info('Shutting down'); if (pid) { await killProcess(pid, 'python webserver'); } else { logger.warn('PID not found'); } logger.info('Shut down'); }; const checkIsReady = () => isReadyPromise; 
return [close, checkIsReady]; }; ``` Python uses a Tornado web server to host an inference API. ```python import tornado.ioloop import tornado.web import sys import gc import json import numpy as np from os.path import expanduser from transformers import PegasusForConditionalGeneration, PegasusTokenizer, logging import torch from multiprocessing import cpu_count executor = tornado.concurrent.futures.ThreadPoolExecutor(max_workers=2) def chunks(lst, n): """Yield successive n-sized chunks from lst.""" for i in range(0, len(lst), n): yield lst[i:i + n] class NumpyEncoder(json.JSONEncoder): def default(self, obj): if isinstance(obj, np.ndarray): return obj.tolist() return json.JSONEncoder.default(self, obj) @torch.no_grad() def generate(tokenizer, model, sentences): batch = tokenizer.prepare_seq2seq_batch(sentences, truncation=True, padding='longest', max_length=60) translated = model.generate(**batch, num_beams=3, repetition_penalty=2.0, length_penalty=0.4, do_sample=True, temperature=0.8) return tokenizer.batch_decode(translated, skip_special_tokens=True) class Summarization(tornado.web.RequestHandler): def initialize(self, model, tokenizer): self.model = model self.tokenizer = tokenizer def set_default_headers(self): self.set_header('Content-Type', 'application/json') @tornado.gen.coroutine def post(self): body = json.loads(self.request.body.decode()) if not isinstance(body, list): raise web.HTTPError(400, 'Invalid request body. A valid JSON document containing an array of strings is required.') for sentence in body: if not isinstance(sentence, str): raise web.HTTPError(400, 'Invalid request body. A valid JSON document containing an array of strings is required.') result = [] sentence_groups = list(chunks(body, 20)) for sentence_group in sentence_groups: decoded = yield executor.submit(generate, self.tokenizer, self.model, sentence_group) result = result + decoded gc.collect() self.finish(json.dumps(result, cls=NumpyEncoder)) def make_app(model, tokenizer): return tornado.web.Application([ (r"/summarization", Summarization, dict(model=model, tokenizer=tokenizer)) ]) if __name__ == "__main__": torch.set_grad_enabled(False) logging.set_verbosity_debug() torch_device = 'cuda' if torch.cuda.is_available() else 'cpu' model = PegasusForConditionalGeneration.from_pretrained('tuner007/pegasus_paraphrase').to(torch_device) tokenizer = PegasusTokenizer.from_pretrained('tuner007/pegasus_paraphrase') model.share_memory() app = make_app(model, tokenizer) app.listen(sys.argv[1]) gc.collect() tornado.ioloop.IOLoop.current().start() ``` Debug output on initial execution, model fails to load after 300s. 
``` Python Webserver /example/python/nlp-tornado-summarization.py - error - loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/tuner007/pegasus_paraphrase/config.json from cache at /root/.cache/torch/transformers/6aa2f0999c84ce856faa292c839572741e6591fae603fe7245e31c2420c621b1.2e89bfaa32f367525ed659d47352d25c26f87f779656ee16db23d056fe7cfc78 Python Webserver /example/python/nlp-tornado-summarization.py - error - Model config PegasusConfig { Python Webserver /example/python/nlp-tornado-summarization.py - error - "activation_dropout": 0.1, Python Webserver /example/python/nlp-tornado-summarization.py - error - "activation_function": "relu", Python Webserver /example/python/nlp-tornado-summarization.py - error - "add_bias_logits": false, Python Webserver /example/python/nlp-tornado-summarization.py - error - "add_final_layer_norm": true, Python Webserver /example/python/nlp-tornado-summarization.py - error - "architectures": [ Python Webserver /example/python/nlp-tornado-summarization.py - error - "PegasusForConditionalGeneration" Python Webserver /example/python/nlp-tornado-summarization.py - error - ], Python Webserver /example/python/nlp-tornado-summarization.py - error - "attention_dropout": 0.1, Python Webserver /example/python/nlp-tornado-summarization.py - error - "bos_token_id": 0, Python Webserver /example/python/nlp-tornado-summarization.py - error - "classif_dropout": 0.0, Python Webserver /example/python/nlp-tornado-summarization.py - error - "classifier_dropout": 0.0, Python Webserver /example/python/nlp-tornado-summarization.py - error - "d_model": 1024, Python Webserver /example/python/nlp-tornado-summarization.py - error - "decoder_attention_heads": 16, Python Webserver /example/python/nlp-tornado-summarization.py - error - "decoder_ffn_dim": 4096, Python Webserver /example/python/nlp-tornado-summarization.py - error - "decoder_layerdrop": 0.0, Python Webserver /example/python/nlp-tornado-summarization.py - error - "decoder_layers": 16, Python Webserver /example/python/nlp-tornado-summarization.py - error - "do_blenderbot_90_layernorm": false, Python Webserver /example/python/nlp-tornado-summarization.py - error - "dropout": 0.1, Python Webserver /example/python/nlp-tornado-summarization.py - error - "encoder_attention_heads": 16, Python Webserver /example/python/nlp-tornado-summarization.py - error - "encoder_ffn_dim": 4096, Python Webserver /example/python/nlp-tornado-summarization.py - error - "encoder_layerdrop": 0.0, Python Webserver /example/python/nlp-tornado-summarization.py - error - "encoder_layers": 16, Python Webserver /example/python/nlp-tornado-summarization.py - error - "eos_token_id": 1, Python Webserver /example/python/nlp-tornado-summarization.py - error - "extra_pos_embeddings": 1, Python Webserver /example/python/nlp-tornado-summarization.py - error - "force_bos_token_to_be_generated": false, Python Webserver /example/python/nlp-tornado-summarization.py - error - "id2label": { Python Webserver /example/python/nlp-tornado-summarization.py - error - "0": "LABEL_0", Python Webserver /example/python/nlp-tornado-summarization.py - error - "1": "LABEL_1", Python Webserver /example/python/nlp-tornado-summarization.py - error - "2": "LABEL_2" Python Webserver /example/python/nlp-tornado-summarization.py - error - }, Python Webserver /example/python/nlp-tornado-summarization.py - error - "init_std": 0.02, Python Webserver /example/python/nlp-tornado-summarization.py - error - "is_encoder_decoder": true, Python Webserver 
/example/python/nlp-tornado-summarization.py - error - "label2id": { Python Webserver /example/python/nlp-tornado-summarization.py - error - "LABEL_0": 0, Python Webserver /example/python/nlp-tornado-summarization.py - error - "LABEL_1": 1, Python Webserver /example/python/nlp-tornado-summarization.py - error - "LABEL_2": 2 Python Webserver /example/python/nlp-tornado-summarization.py - error - }, Python Webserver /example/python/nlp-tornado-summarization.py - error - "length_penalty": 0.8, Python Webserver /example/python/nlp-tornado-summarization.py - error - "max_length": 60, Python Webserver /example/python/nlp-tornado-summarization.py - error - "max_position_embeddings": 60, Python Webserver /example/python/nlp-tornado-summarization.py - error - "model_type": "pegasus", Python Webserver /example/python/nlp-tornado-summarization.py - error - "normalize_before": true, Python Webserver /example/python/nlp-tornado-summarization.py - error - "normalize_embedding": false, Python Webserver /example/python/nlp-tornado-summarization.py - error - "num_beams": 8, Python Webserver /example/python/nlp-tornado-summarization.py - error - "num_hidden_layers": 16, Python Webserver /example/python/nlp-tornado-summarization.py - error - "pad_token_id": 0, Python Webserver /example/python/nlp-tornado-summarization.py - error - "scale_embedding": true, Python Webserver /example/python/nlp-tornado-summarization.py - error - "static_position_embeddings": true, Python Webserver /example/python/nlp-tornado-summarization.py - error - "vocab_size": 96103 Python Webserver /example/python/nlp-tornado-summarization.py - error - } Python Webserver /example/python/nlp-tornado-summarization.py - error - loading weights file https://cdn.huggingface.co/tuner007/pegasus_paraphrase/pytorch_model.bin from cache at /root/.cache/torch/transformers/387ce6aee5feafa70429f4659a02b7433a17ea8b0a6c5cad24e894cc46c7b88e.37d8caa66cfa802d672246ab9f2f72b886c1a58ac1ba12892a05c17d8b0d421f Python Webserver /example/python/nlp-tornado-summarization.py - info - Shutting down Process Killer - info - Sending SIGTERM to python webserver process 38 Python Webserver /example/python/nlp-tornado-summarization.py - info - Process closed Process Killer - info - Stopped python webserver process 38 with SIGTERM Python Webserver /example/python/nlp-tornado-summarization.py - info - Shut down Python Webserver /example/python/nlp-tornado-summarization.py - error - Unable to conect to /example/python/nlp-tornado-summarization.py at http://127.0.0.1:43777 after 300 seconds Python Webserver /example/python/nlp-tornado-summarization.py - error - Error: connect ECONNREFUSED 127.0.0.1:43777 Python Webserver /example/python/nlp-tornado-summarization.py - error - at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1145:16) Python Webserver /example/python/nlp-tornado-summarization.py - error - errno: "ECONNREFUSED" Python Webserver /example/python/nlp-tornado-summarization.py - error - code: "ECONNREFUSED" Python Webserver /example/python/nlp-tornado-summarization.py - error - syscall: "connect" Python Webserver /example/python/nlp-tornado-summarization.py - error - address: "127.0.0.1" Python Webserver /example/python/nlp-tornado-summarization.py - error - port: 43777 Python Webserver /example/python/nlp-tornado-summarization.py - error - response: undefined ``` Debug output on second execution, model loads after 60s. 
``` Python Webserver /example/python/nlp-tornado-summarization.py - info - Spawning /example/python/nlp-tornado-summarization.py Python Webserver /example/python/nlp-tornado-summarization.py - info - Listening Python Webserver /example/python/nlp-tornado-summarization.py - error - loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/tuner007/pegasus_paraphrase/config.json from cache at /root/.cache/torch/transformers/6aa2f0999c84ce856faa292c839572741e6591fae603fe7245e31c2420c621b1.2e89bfaa32f367525ed659d47352d25c26f87f779656ee16db23d056fe7cfc78 Python Webserver /example/python/nlp-tornado-summarization.py - error - Model config PegasusConfig { Python Webserver /example/python/nlp-tornado-summarization.py - error - "activation_dropout": 0.1, Python Webserver /example/python/nlp-tornado-summarization.py - error - "activation_function": "relu", Python Webserver /example/python/nlp-tornado-summarization.py - error - "add_bias_logits": false, Python Webserver /example/python/nlp-tornado-summarization.py - error - "add_final_layer_norm": true, Python Webserver /example/python/nlp-tornado-summarization.py - error - "architectures": [ Python Webserver /example/python/nlp-tornado-summarization.py - error - "PegasusForConditionalGeneration" Python Webserver /example/python/nlp-tornado-summarization.py - error - ], Python Webserver /example/python/nlp-tornado-summarization.py - error - "attention_dropout": 0.1, Python Webserver /example/python/nlp-tornado-summarization.py - error - "bos_token_id": 0, Python Webserver /example/python/nlp-tornado-summarization.py - error - "classif_dropout": 0.0, Python Webserver /example/python/nlp-tornado-summarization.py - error - "classifier_dropout": 0.0, Python Webserver /example/python/nlp-tornado-summarization.py - error - "d_model": 1024, Python Webserver /example/python/nlp-tornado-summarization.py - error - "decoder_attention_heads": 16, Python Webserver /example/python/nlp-tornado-summarization.py - error - "decoder_ffn_dim": 4096, Python Webserver /example/python/nlp-tornado-summarization.py - error - "decoder_layerdrop": 0.0, Python Webserver /example/python/nlp-tornado-summarization.py - error - "decoder_layers": 16, Python Webserver /example/python/nlp-tornado-summarization.py - error - "do_blenderbot_90_layernorm": false, Python Webserver /example/python/nlp-tornado-summarization.py - error - "dropout": 0.1, Python Webserver /example/python/nlp-tornado-summarization.py - error - "encoder_attention_heads": 16, Python Webserver /example/python/nlp-tornado-summarization.py - error - "encoder_ffn_dim": 4096, Python Webserver /example/python/nlp-tornado-summarization.py - error - "encoder_layerdrop": 0.0, Python Webserver /example/python/nlp-tornado-summarization.py - error - "encoder_layers": 16, Python Webserver /example/python/nlp-tornado-summarization.py - error - "eos_token_id": 1, Python Webserver /example/python/nlp-tornado-summarization.py - error - "extra_pos_embeddings": 1, Python Webserver /example/python/nlp-tornado-summarization.py - error - "force_bos_token_to_be_generated": false, Python Webserver /example/python/nlp-tornado-summarization.py - error - "id2label": { Python Webserver /example/python/nlp-tornado-summarization.py - error - "0": "LABEL_0", Python Webserver /example/python/nlp-tornado-summarization.py - error - "1": "LABEL_1", Python Webserver /example/python/nlp-tornado-summarization.py - error - "2": "LABEL_2" Python Webserver /example/python/nlp-tornado-summarization.py - error - }, Python Webserver 
/example/python/nlp-tornado-summarization.py - error - "init_std": 0.02, Python Webserver /example/python/nlp-tornado-summarization.py - error - "is_encoder_decoder": true, Python Webserver /example/python/nlp-tornado-summarization.py - error - "label2id": { Python Webserver /example/python/nlp-tornado-summarization.py - error - "LABEL_0": 0, Python Webserver /example/python/nlp-tornado-summarization.py - error - "LABEL_1": 1, Python Webserver /example/python/nlp-tornado-summarization.py - error - "LABEL_2": 2 Python Webserver /example/python/nlp-tornado-summarization.py - error - }, Python Webserver /example/python/nlp-tornado-summarization.py - error - "length_penalty": 0.8, Python Webserver /example/python/nlp-tornado-summarization.py - error - "max_length": 60, Python Webserver /example/python/nlp-tornado-summarization.py - error - "max_position_embeddings": 60, Python Webserver /example/python/nlp-tornado-summarization.py - error - "model_type": "pegasus", Python Webserver /example/python/nlp-tornado-summarization.py - error - "normalize_before": true, Python Webserver /example/python/nlp-tornado-summarization.py - error - "normalize_embedding": false, Python Webserver /example/python/nlp-tornado-summarization.py - error - "num_beams": 8, Python Webserver /example/python/nlp-tornado-summarization.py - error - "num_hidden_layers": 16, Python Webserver /example/python/nlp-tornado-summarization.py - error - "pad_token_id": 0, Python Webserver /example/python/nlp-tornado-summarization.py - error - "scale_embedding": true, Python Webserver /example/python/nlp-tornado-summarization.py - error - "static_position_embeddings": true, Python Webserver /example/python/nlp-tornado-summarization.py - error - "vocab_size": 96103 Python Webserver /example/python/nlp-tornado-summarization.py - error - } Python Webserver /example/python/nlp-tornado-summarization.py - error - loading weights file https://cdn.huggingface.co/tuner007/pegasus_paraphrase/pytorch_model.bin from cache at /root/.cache/torch/transformers/387ce6aee5feafa70429f4659a02b7433a17ea8b0a6c5cad24e894cc46c7b88e.37d8caa66cfa802d672246ab9f2f72b886c1a58ac1ba12892a05c17d8b0d421f Python Webserver /example/python/nlp-tornado-summarization.py - error - All model checkpoint weights were used when initializing PegasusForConditionalGeneration. Python Webserver /example/python/nlp-tornado-summarization.py - error - All the weights of PegasusForConditionalGeneration were initialized from the model checkpoint at tuner007/pegasus_paraphrase. Python Webserver /example/python/nlp-tornado-summarization.py - error - If your task is similar to the task the model of the checkpoint was trained on, you can already use PegasusForConditionalGeneration for predictions without further training. Python Webserver /example/python/nlp-tornado-summarization.py - error - Model name 'tuner007/pegasus_paraphrase' not found in model shortcut name list (google/pegasus-xsum). Assuming 'tuner007/pegasus_paraphrase' is a path, a model identifier, or url to a directory containing tokenizer files. 
Python Webserver /example/python/nlp-tornado-summarization.py - error - loading file https://s3.amazonaws.com/models.huggingface.co/bert/tuner007/pegasus_paraphrase/spiece.model from cache at /root/.cache/torch/transformers/fa4532c0035b101d7abcd5c0c9c34a83288902b66c5616034db1a47643e05c75.efce77b8dcd2c57b109b0d10170fcdcd53f23c21286974d4f66706536758ab6e Python Webserver /example/python/nlp-tornado-summarization.py - error - loading file https://s3.amazonaws.com/models.huggingface.co/bert/tuner007/pegasus_paraphrase/added_tokens.json from cache at None Python Webserver /example/python/nlp-tornado-summarization.py - error - loading file https://s3.amazonaws.com/models.huggingface.co/bert/tuner007/pegasus_paraphrase/special_tokens_map.json from cache at /root/.cache/torch/transformers/87ea1eeb171e0c2b3d4a7c9dbef4cb9aa4a7251e3673777ff8b756af93bb1e65.d142dfa55f201f5033fe9ee40eb8fe1ca965dcb0f38b175386020492986d507f Python Webserver /example/python/nlp-tornado-summarization.py - error - loading file https://s3.amazonaws.com/models.huggingface.co/bert/tuner007/pegasus_paraphrase/tokenizer_config.json from cache at /root/.cache/torch/transformers/9ee22427dfb233033bc52ded6b335bbd3dd17b3698f3349e8aecb3c0ec0a99aa.1598fab009ce003f8802a6055c13134aa3be28abc2cca8db6a881bdc1ef0164e Python Webserver /example/python/nlp-tornado-summarization.py - error - loading file https://s3.amazonaws.com/models.huggingface.co/bert/tuner007/pegasus_paraphrase/tokenizer.json from cache at None Python Webserver /example/python/nlp-tornado-summarization.py - info - Connected to /example/python/nlp-tornado-summarization.py at http://127.0.0.1:43777 after 282.4 seconds Python Webserver /example/python/nlp-tornado-encoding.py - info - Spawning /example/python/nlp-tornado-encoding.py Python Webserver /example/python/nlp-tornado-encoding.py - info - Listening Python Webserver /example/python/nlp-tornado-summarization.py - error - WARNING:tornado.access:404 GET /458e8657-7e9f-468c-9b87-4d7383e42df8 (127.0.0.1) 0.54ms Python Webserver /example/python/nlp-tornado-encoding.py - info - Connected to /example/python/nlp-tornado-encoding.py at http://127.0.0.1:43778 after 65.1 seconds Python Webserver /example/python/nlp-tornado-encoding.py - error - WARNING:tornado.access:404 GET /77f8c1da-9c3d-4ef0-934a-b95f434349f6 (127.0.0.1) 0.54ms ```
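For completeness, the endpoint is exercised with a plain JSON POST; a minimal client sketch (the port matches the logs above, and `requests` is assumed to be installed):

```python
import json

import requests

# The handler expects a JSON array of strings and returns a JSON array of summaries.
sentences = ["The quick brown fox jumps over the lazy dog."]
response = requests.post("http://127.0.0.1:43777/summarization", data=json.dumps(sentences))
response.raise_for_status()
print(response.json())
```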
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8469/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8469/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8468
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8468/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8468/comments
https://api.github.com/repos/huggingface/transformers/issues/8468/events
https://github.com/huggingface/transformers/pull/8468
740,844,418
MDExOlB1bGxSZXF1ZXN0NTE5MjYzNDMw
8,468
Example NER script predicts on tokenized dataset
{ "login": "sarnoult", "id": 31313050, "node_id": "MDQ6VXNlcjMxMzEzMDUw", "avatar_url": "https://avatars.githubusercontent.com/u/31313050?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sarnoult", "html_url": "https://github.com/sarnoult", "followers_url": "https://api.github.com/users/sarnoult/followers", "following_url": "https://api.github.com/users/sarnoult/following{/other_user}", "gists_url": "https://api.github.com/users/sarnoult/gists{/gist_id}", "starred_url": "https://api.github.com/users/sarnoult/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sarnoult/subscriptions", "organizations_url": "https://api.github.com/users/sarnoult/orgs", "repos_url": "https://api.github.com/users/sarnoult/repos", "events_url": "https://api.github.com/users/sarnoult/events{/privacy}", "received_events_url": "https://api.github.com/users/sarnoult/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,605
1,605
1,605
CONTRIBUTOR
null
The new run_ner.py script (relying on datasets) tries to run prediction on the input test set `datasets["test"]`, but it should really use the tokenized set `tokenized_datasets["test"]`.

# What does this PR do?

Fix an error in run_ner.py at the prediction step on a custom dataset.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8468/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8468/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8468", "html_url": "https://github.com/huggingface/transformers/pull/8468", "diff_url": "https://github.com/huggingface/transformers/pull/8468.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8468.patch", "merged_at": 1605108504000 }
https://api.github.com/repos/huggingface/transformers/issues/8467
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8467/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8467/comments
https://api.github.com/repos/huggingface/transformers/issues/8467/events
https://github.com/huggingface/transformers/issues/8467
740,837,112
MDU6SXNzdWU3NDA4MzcxMTI=
8,467
Fine tuning a classification model with engineered features
{ "login": "rogeriobromfman", "id": 23483574, "node_id": "MDQ6VXNlcjIzNDgzNTc0", "avatar_url": "https://avatars.githubusercontent.com/u/23483574?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rogeriobromfman", "html_url": "https://github.com/rogeriobromfman", "followers_url": "https://api.github.com/users/rogeriobromfman/followers", "following_url": "https://api.github.com/users/rogeriobromfman/following{/other_user}", "gists_url": "https://api.github.com/users/rogeriobromfman/gists{/gist_id}", "starred_url": "https://api.github.com/users/rogeriobromfman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rogeriobromfman/subscriptions", "organizations_url": "https://api.github.com/users/rogeriobromfman/orgs", "repos_url": "https://api.github.com/users/rogeriobromfman/repos", "events_url": "https://api.github.com/users/rogeriobromfman/events{/privacy}", "received_events_url": "https://api.github.com/users/rogeriobromfman/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discuss.huggingface.co) instead?\r\n\r\nThanks!" ]
1,605
1,605
1,605
NONE
null
# 🚀 Feature request

When fine-tuning a BERT model for text classification, it would be useful to be able to add engineered features to improve accuracy.

## Motivation

For example:
- whether there are any dates in the text (based on NER);
- whether the text starts with punctuation;
- whether the font size is larger or smaller than in the rest of the document, etc.

These signals can help the model make better predictions about the class of the text.

## Contribution

At the moment, I've been adding these features as custom tokens at the end of the text, e.g. "\<DATES\>", "\<PUNCT\>"; a minimal sketch follows.
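To make the workaround above concrete, here is a minimal sketch; the checkpoint and the exact token names are illustrative:

```python
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Register the engineered-feature markers and grow the embedding matrix to match.
tokenizer.add_tokens(["<DATES>", "<PUNCT>"])
model.resize_token_embeddings(len(tokenizer))

# Markers that fire for a given example are appended to its text.
text = "Invoice issued on 2020-11-11. <DATES>"
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)
```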
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8467/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8467/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8466
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8466/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8466/comments
https://api.github.com/repos/huggingface/transformers/issues/8466/events
https://github.com/huggingface/transformers/pull/8466
740,809,139
MDExOlB1bGxSZXF1ZXN0NTE5MjMzODU2
8,466
Fix TF next sentence output
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,605
1,605
1,605
CONTRIBUTOR
null
# What does this PR do?

Make the loss optional in the TF next sentence prediction output.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8466/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8466/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8466", "html_url": "https://github.com/huggingface/transformers/pull/8466", "diff_url": "https://github.com/huggingface/transformers/pull/8466.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8466.patch", "merged_at": 1605105699000 }
https://api.github.com/repos/huggingface/transformers/issues/8465
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8465/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8465/comments
https://api.github.com/repos/huggingface/transformers/issues/8465/events
https://github.com/huggingface/transformers/issues/8465
740,798,832
MDU6SXNzdWU3NDA3OTg4MzI=
8,465
PyTorch vs ONNX: PyTorch is faster and produces different output
{ "login": "agemagician", "id": 6087313, "node_id": "MDQ6VXNlcjYwODczMTM=", "avatar_url": "https://avatars.githubusercontent.com/u/6087313?v=4", "gravatar_id": "", "url": "https://api.github.com/users/agemagician", "html_url": "https://github.com/agemagician", "followers_url": "https://api.github.com/users/agemagician/followers", "following_url": "https://api.github.com/users/agemagician/following{/other_user}", "gists_url": "https://api.github.com/users/agemagician/gists{/gist_id}", "starred_url": "https://api.github.com/users/agemagician/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/agemagician/subscriptions", "organizations_url": "https://api.github.com/users/agemagician/orgs", "repos_url": "https://api.github.com/users/agemagician/repos", "events_url": "https://api.github.com/users/agemagician/events{/privacy}", "received_events_url": "https://api.github.com/users/agemagician/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "I also got different results 🤦‍", "I have figured out the problem, but I don't have the solution.\r\nWhen you use a single sample per batch it works correctly, but when you use more than one sample per batch, the results are totally different.", "@mfuntowicz and @LysandreJik , it will be great if you could show us an example for how to correctly use batch processing for onnx inference.", "Hey @agemagician! I have exactly the same problem, great inference time with onnx when batch size = 1 but when batch size is increased, raw pytorch wins over onnx. Did you find any solution for the issue?", "unfortunately, not.", "That's unfortunate..", "This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread." ]
1,605
1,614
1,614
CONTRIBUTOR
null
## Environment info

- `transformers` version: 3.5.0
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0+cu101 (True)
- Tensorflow version (GPU?): 2.3.0 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No

### Who can help

albert, bert, GPT2, XLM: @LysandreJik
Onnx: @mfuntowicz

## Information

Model I am using (Bert, XLNet ...):

The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)

The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)

## To reproduce

Steps to reproduce the behavior: https://colab.research.google.com/drive/1UwgWgUF4k_GPJ5TcziHo4eH_rRFQeNVL?usp=sharing

## Expected behavior

I have followed the onnx export tutorial: https://github.com/huggingface/transformers/blob/master/notebooks/04-onnx-export.ipynb

However, I have found 2 issues:
1. PyTorch is faster than ONNX.
2. ONNX produces different embedding output than PyTorch.

Could anyone help me figure out the issue?
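For reference, the batched setup being tested looks roughly like this; it is only a sketch, and the exported model path is a placeholder from the notebook:

```python
import onnxruntime
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
session = onnxruntime.InferenceSession("onnx/bert-base-cased.onnx")  # placeholder path

sentences = ["first example", "a noticeably longer second example sentence"]
# With batch size > 1 the shorter inputs get padded, so the attention mask
# has to be fed through as well; otherwise the outputs diverge from PyTorch.
encoded = tokenizer(sentences, padding=True, return_tensors="np")
outputs = session.run(None, dict(encoded))
print(outputs[0].shape)
```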
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8465/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/8465/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8464
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8464/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8464/comments
https://api.github.com/repos/huggingface/transformers/issues/8464/events
https://github.com/huggingface/transformers/pull/8464
740,792,864
MDExOlB1bGxSZXF1ZXN0NTE5MjIwMjUz
8,464
Add model card for ai4bharat/indic-bert
{ "login": "divkakwani", "id": 2513455, "node_id": "MDQ6VXNlcjI1MTM0NTU=", "avatar_url": "https://avatars.githubusercontent.com/u/2513455?v=4", "gravatar_id": "", "url": "https://api.github.com/users/divkakwani", "html_url": "https://github.com/divkakwani", "followers_url": "https://api.github.com/users/divkakwani/followers", "following_url": "https://api.github.com/users/divkakwani/following{/other_user}", "gists_url": "https://api.github.com/users/divkakwani/gists{/gist_id}", "starred_url": "https://api.github.com/users/divkakwani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/divkakwani/subscriptions", "organizations_url": "https://api.github.com/users/divkakwani/orgs", "repos_url": "https://api.github.com/users/divkakwani/repos", "events_url": "https://api.github.com/users/divkakwani/events{/privacy}", "received_events_url": "https://api.github.com/users/divkakwani/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "really cool, thanks for sharing" ]
1,605
1,605
1,605
CONTRIBUTOR
null
# What does this PR do?

This PR adds a model card for the IndicBERT model (shortcut name: `ai4bharat/indic-bert`)

## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?

Anyone in the community is free to review the PR once the tests have passed. Also: @julien-c (model cards)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8464/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8464/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8464", "html_url": "https://github.com/huggingface/transformers/pull/8464", "diff_url": "https://github.com/huggingface/transformers/pull/8464.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8464.patch", "merged_at": 1605724130000 }
https://api.github.com/repos/huggingface/transformers/issues/8463
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8463/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8463/comments
https://api.github.com/repos/huggingface/transformers/issues/8463/events
https://github.com/huggingface/transformers/pull/8463
740,770,679
MDExOlB1bGxSZXF1ZXN0NTE5MjAxNjU3
8,463
Better regex for extracting language code in tokenization_marian.py
{ "login": "soumyac1999", "id": 33203398, "node_id": "MDQ6VXNlcjMzMjAzMzk4", "avatar_url": "https://avatars.githubusercontent.com/u/33203398?v=4", "gravatar_id": "", "url": "https://api.github.com/users/soumyac1999", "html_url": "https://github.com/soumyac1999", "followers_url": "https://api.github.com/users/soumyac1999/followers", "following_url": "https://api.github.com/users/soumyac1999/following{/other_user}", "gists_url": "https://api.github.com/users/soumyac1999/gists{/gist_id}", "starred_url": "https://api.github.com/users/soumyac1999/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/soumyac1999/subscriptions", "organizations_url": "https://api.github.com/users/soumyac1999/orgs", "repos_url": "https://api.github.com/users/soumyac1999/repos", "events_url": "https://api.github.com/users/soumyac1999/events{/privacy}", "received_events_url": "https://api.github.com/users/soumyac1999/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null }, { "id": 2039044877, "node_id": "MDU6TGFiZWwyMDM5MDQ0ODc3", "url": "https://api.github.com/repos/huggingface/transformers/labels/marian", "name": "marian", "color": "30cc95", "default": false, "description": "" } ]
closed
false
null
[]
[ "cc @patrickvonplaten ", "No, 2 character language codes are not covered. However `>>.{2}<<|>>.{3}<<|>>.{3}\\_.{4}<<` can be used.\r\n\r\nI can try writing tests. Is these somewhere where I could find an exhaustive set of language codes? I had written the regex to cover the 194 language code in `'Helsinki-NLP/opus-mt-en-mul'`.\r\n\r\n```python\r\ndef test_language_codes(self):\r\n tok = MarianTokenizer.from_pretrained(f\"{ORG_NAME}opus-mt-en-mul\")\r\n batch = tok.prepare_seq2seq_batch([\">>hin<< I am a small frog\", \">>zlm_Latn<< I am a small frog\", \">>fr<< I am a small frog\"], return_tensors=FRAMEWORK)\r\n \r\n expected = [[[888, 21, 437, 9, 2613, 37565, 0], [770, 21, 437, 9, 2613, 37565, 0], [1, 21, 437, 9, 2613, 37565, 0]]\r\n for i in range(3):\r\n self.assertListEqual(expected[i], batch.input_ids[i])\r\n```\r\n\r\nThe above will need some changes since `>>fr<<` is not in the vocabulary and the sentences also do not have another `<<` in them (maybe a different test for that?).", "> Is there somewhere where I could find an exhaustive set of language codes?\r\n\r\nNot easily, you can find the supported codes for any model by looking for `tgt_languages` or `tgt_constituents` on a model card e.g. https://huggingface.co/Helsinki-NLP/opus-mt-en-ROMANCE or https://huggingface.co/Helsinki-NLP/opus-mt-en-roa?text=%3E%3Efra%3C%3C+My+name+is+Sarah+and+I+live+in+London\r\n\r\n\r\nHere are some more:\r\n\r\n```python\r\ntgt_constituents= {'ita', 'cat', 'roh', 'spa', 'pap', 'lmo', 'mwl', 'lij', 'lad_Latn', 'ext', 'ron', 'ast', 'glg', 'pms', 'zsm_Latn', 'gcf_Latn', 'lld_Latn', 'min', 'tmw_Latn', 'cos', 'wln', 'zlm_Latn', 'por', 'egl', 'oci', 'vec', 'arg', 'ind', 'fra', 'hat', 'lad', 'max_Latn', 'frm_Latn', 'scn', 'mfe'}\r\ntgt_constituents: {'cmn_Hans', 'nan', 'nan_Hani', 'gan', 'yue', 'cmn_Kana', 'yue_Hani', 'wuu_Bopo', 'cmn_Latn', 'yue_Hira', 'cmn_Hani', 'cjy_Hans', 'cmn', 'lzh_Hang', 'lzh_Hira', 'cmn_Hant', 'lzh_Bopo', 'zho', 'zho_Hans', 'zho_Hant', 'lzh_Hani', 'yue_Hang', 'wuu', 'yue_Kana', 'wuu_Latn', 'yue_Bopo', 'cjy_Hant', 'yue_Hans', 'lzh', 'cmn_Hira', 'lzh_Yiii', 'lzh_Hans', 'cmn_Bopo', 'cmn_Hang', 'hak_Hani', 'cmn_Yiii', 'yue_Hant', 'lzh_Kana', 'wuu_Hani'}\r\n```\r\n\r\n\r\nI think we should accept any number of alphabet characters or '_' inside the `>>` or at least up to 8.\r\n\r\n\r\n\r\n", "This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread." ]
1,605
1,614
1,614
NONE
null
# Better regex for extracting language code in tokenization_marian.py

In `tokenization_marian.py`, the regex `>>.+<<` is used to extract the language token from the sentences, leading to the following incorrect tokenization.

```
Example Sentence: >>hin<< We use cout<< function to print a line in C++.
Current Tokenizer gives: ['<unk>', '▁function', '▁to', '▁print', '▁a', '▁line', '▁in', '▁C', '++', '.', '</s>']
Expected Tokenization: ['>>hin<<', '▁We', '▁use', '▁c', 'out', '<', '<', '▁function', '▁to', '▁print', '▁a', '▁line', '▁in', '▁C', '++', '.', '</s>']
```

This pull request changes the regex to `>>.{3}<<|>>.{3}\_.{4}<<`, which covers the 194 language tags in the en-mul model.

@sshleifer
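A quick sanity check of the old and new patterns with plain `re`:

```python
import re

old_code_re = re.compile(r">>.+<<")
new_code_re = re.compile(r">>.{3}<<|>>.{3}\_.{4}<<")

text = ">>hin<< We use cout<< function to print a line in C++."
print(old_code_re.match(text).group())  # '>>hin<< We use cout<<' -- greedy, swallows real text
print(new_code_re.match(text).group())  # '>>hin<<' -- only the language token
print(new_code_re.match(">>zlm_Latn<< I am a small frog").group())  # '>>zlm_Latn<<'
```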
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8463/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8463/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8463", "html_url": "https://github.com/huggingface/transformers/pull/8463", "diff_url": "https://github.com/huggingface/transformers/pull/8463.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8463.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/8462
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8462/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8462/comments
https://api.github.com/repos/huggingface/transformers/issues/8462/events
https://github.com/huggingface/transformers/pull/8462
740,748,367
MDExOlB1bGxSZXF1ZXN0NTE5MTgyODcz
8,462
Add next sentence prediction loss computation
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,605
1,605
1,605
CONTRIBUTOR
null
# What does this PR do?

This PR adds the loss computation for the next sentence prediction task in TF BERT and MobileBERT.
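With this change, the loss can be read straight off the model call; a minimal sketch, assuming the label keyword mirrors the PyTorch `next_sentence_label` argument:

```python
import tensorflow as tf
from transformers import BertTokenizer, TFBertForNextSentencePrediction

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = TFBertForNextSentencePrediction.from_pretrained("bert-base-uncased")

encoding = tokenizer("The sky is blue.", "Pizza was invented in Naples.", return_tensors="tf")
# 0 = sentence B follows sentence A, 1 = sentence B is a random sentence.
outputs = model(encoding, next_sentence_label=tf.constant([1]), return_dict=True)
print(outputs.loss)
```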
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8462/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8462/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8462", "html_url": "https://github.com/huggingface/transformers/pull/8462", "diff_url": "https://github.com/huggingface/transformers/pull/8462.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8462.patch", "merged_at": 1605103327000 }
https://api.github.com/repos/huggingface/transformers/issues/8461
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8461/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8461/comments
https://api.github.com/repos/huggingface/transformers/issues/8461/events
https://github.com/huggingface/transformers/issues/8461
740,709,364
MDU6SXNzdWU3NDA3MDkzNjQ=
8,461
multiple hard-coded paths in transformers/file_utils.py
{ "login": "rabeehkarimimahabadi", "id": 73364383, "node_id": "MDQ6VXNlcjczMzY0Mzgz", "avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehkarimimahabadi", "html_url": "https://github.com/rabeehkarimimahabadi", "followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers", "following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions", "organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs", "repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos", "events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "will report this in datasets repo. " ]
1,605
1,605
1,605
NONE
null
Hi,

I need to run the code on a machine without internet access. I am running finetune_trainer.py on a dataset from the datasets repo; due to these hard-coded paths, I cannot get the code running without internet access. Could you please turn every hard-coded path into a parameter, so that users without internet access can download the data themselves and set the paths?

Here is the full path to the file I am mentioning: /idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py

Thanks.
Best,
Rabeeh
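As a stop-gap, a possible workaround (a sketch; both paths are placeholders) is to copy the dataset script and its raw files to the machine by hand and load everything from disk:

```python
from datasets import load_dataset

# Download the loading script and the raw data on a machine with internet
# access, copy them over, then point load_dataset at the local files.
dataset = load_dataset(
    "/path/to/local/dataset_script.py",  # local copy of the loading script
    data_dir="/path/to/local/data",      # local copy of the raw data files
)
```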
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8461/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8461/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8460
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8460/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8460/comments
https://api.github.com/repos/huggingface/transformers/issues/8460/events
https://github.com/huggingface/transformers/pull/8460
740,704,981
MDExOlB1bGxSZXF1ZXN0NTE5MTQ2NzIy
8,460
Fix TF Longformer
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,605
1,605
1,605
CONTRIBUTOR
null
# What does this PR do?

Fix TF Longformer model outputs.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8460/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8460/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8460", "html_url": "https://github.com/huggingface/transformers/pull/8460", "diff_url": "https://github.com/huggingface/transformers/pull/8460.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8460.patch", "merged_at": 1605095655000 }
https://api.github.com/repos/huggingface/transformers/issues/8459
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8459/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8459/comments
https://api.github.com/repos/huggingface/transformers/issues/8459/events
https://github.com/huggingface/transformers/issues/8459
740,679,846
MDU6SXNzdWU3NDA2Nzk4NDY=
8,459
Question Answering Documentation Example Bug
{ "login": "iremnasir", "id": 51764807, "node_id": "MDQ6VXNlcjUxNzY0ODA3", "avatar_url": "https://avatars.githubusercontent.com/u/51764807?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iremnasir", "html_url": "https://github.com/iremnasir", "followers_url": "https://api.github.com/users/iremnasir/followers", "following_url": "https://api.github.com/users/iremnasir/following{/other_user}", "gists_url": "https://api.github.com/users/iremnasir/gists{/gist_id}", "starred_url": "https://api.github.com/users/iremnasir/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iremnasir/subscriptions", "organizations_url": "https://api.github.com/users/iremnasir/orgs", "repos_url": "https://api.github.com/users/iremnasir/repos", "events_url": "https://api.github.com/users/iremnasir/events{/privacy}", "received_events_url": "https://api.github.com/users/iremnasir/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hi! That sounds good, do you want to open a PR?", "Yes, I am on it. Thanks!\r\n", "+1\r\nany updates?", "@iremnasir can you please let me know how exactly you managed to solve it? I am having the same issue. \r\n![Screenshot 2020-12-07 at 00 56 45](https://user-images.githubusercontent.com/28517335/101344563-ae506700-38ab-11eb-9e12-98abe95e238c.png)\r\n", "Hi, you can see my answer above for the solution in Expected Behavior section", "Got it. Thanks\r\n\r\nAs got the error for AutoTokenizer: `NameError: name 'AutoTokenizer' is not defined `\r\nJust imported all:\r\n`from transformers import * ` earlier I was only importing `pipeline` from transformers", "This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread.", "Thanks @iremnasir, it worked. Why hasn't this been merged yet?" ]
1,605
1,620
1,614
NONE
null
## Environment info

- `transformers` version: 3.5.0
- Platform: Darwin-19.6.0-x86_64-i386-64bit
- Python version: 3.6.12
- PyTorch version (GPU?): 1.7.0 (False)
- Tensorflow version (GPU?): 2.3.1 (False)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: parallel

### Who can help

documentation: @sgugger

## Information

Model I am using (Bert, XLNet ...): bert-large-uncased-whole-word-masking-finetuned-squad

The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)

The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)

## To reproduce

Steps to reproduce the behavior:

Trying to run this [example script](https://huggingface.co/transformers/task_summary.html#extractive-question-answering) for TF, I kept getting the error:

```
InvalidArgumentError: Value for attr 'T' of string is not in the list of allowed values: float, double, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64, bool ; NodeDef: {{node ArgMax}}; Op<name=ArgMax; signature=input:T, dimension:Tidx -> output:output_type; attr=T:type,allowed=[DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT16, ..., DT_COMPLEX128, DT_HALF, DT_UINT32, DT_UINT64, DT_BOOL]; attr=Tidx:type,default=DT_INT32,allowed=[DT_INT32, DT_INT64]; attr=output_type:type,default=DT_INT64,allowed=[DT_INT32, DT_INT64]> [Op:ArgMax]
```

at the line where `tf.argmax()` is called on `answer_start_scores` and `answer_end_scores`.

## Expected behavior

This error is normal, since `type(answer_start_scores)` is `str` (unpacking the dict-like model output yields its keys), so I propose the following amendment to the documentation:

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased-whole-word-masking-finetuned-squad")
model = TFAutoModelForQuestionAnswering.from_pretrained("bert-large-uncased-whole-word-masking-finetuned-squad", return_dict=True)

text = r"""
🤗 Transformers (formerly known as pytorch-transformers and pytorch-pretrained-bert) provides general-purpose
architectures (BERT, GPT-2, RoBERTa, XLM, DistilBert, XLNet…) for Natural Language Understanding (NLU) and Natural Language
Generation (NLG) with over 32+ pretrained models in 100+ languages and deep interoperability between
TensorFlow 2.0 and PyTorch.
"""

questions = [
    "How many pretrained models are available in 🤗 Transformers?",
    "What does 🤗 Transformers provide?",
    "🤗 Transformers provides interoperability between which frameworks?",
]

for question in questions:
    inputs = tokenizer(question, text, add_special_tokens=True, return_tensors="tf")
    input_ids = inputs["input_ids"].numpy()[0]
    text_tokens = tokenizer.convert_ids_to_tokens(input_ids)
    output = model(inputs)
    # Get the most likely beginning of the answer with the argmax of the score
    answer_start = tf.argmax(output.start_logits, axis=1).numpy()[0]
    # Get the most likely end of the answer with the argmax of the score
    answer_end = (tf.argmax(output.end_logits, axis=1) + 1).numpy()[0]
    answer = tokenizer.convert_tokens_to_string(tokenizer.convert_ids_to_tokens(input_ids[answer_start:answer_end]))
    print(f"Question: {question}")
    print(f"Answer: {answer}")
```

Best,
Irem
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8459/reactions", "total_count": 2, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8459/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8458
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8458/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8458/comments
https://api.github.com/repos/huggingface/transformers/issues/8458/events
https://github.com/huggingface/transformers/pull/8458
740,647,359
MDExOlB1bGxSZXF1ZXN0NTE5MDk4NDQ2
8,458
Fix logging in the examples
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The five examples scripts mentioned above all already use the huggingface logger, and already set it to the correct verbosity level. Seeing as the example scripts are *examples*, they really should be as straightforward as possible, and should not repeat unnecessary statements.\r\n\r\nPlease update the following files: `run_clm.py`, `run_mlm.py`, `run_glue.py`, `run_mlm_wwm.py`, `run_plm.py` so that they do not have useless statements.\r\n\r\n@sgugger pointed you towards the correct area, you should be able to find it if you look for \"logging\".", "I don't see the import neither the usage. Can you link it please?", "https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_clm.py#L170", "Ok got it I was looking at my own updated file ahah.", "It is also in your file, below https://github.com/huggingface/transformers/blob/d410b83111238f7b949b7c9c6a4c3f689d29519b/examples/language-modeling/run_clm.py#L176", "Ok I have removed the duplicate import and add only the two missing calls.", "Already done the changes 👍 ", "No, the changes are not done:\r\n1. in the scripts like the new `run_glue.py`, the code for logging is duplicated and executed once for every process, then once for the main process.\r\n2. in all the other scripts, the code is executed on all processes.\r\n\r\nThis is also added in util files or test files where it's just not necessary.", "I really don't get what you mean, sorry. What are the exact changes I have to make for each file?", "@sgugger Please check the last commit and let me know if it was what you meant. Otherwise, please, can you be more specific in what you want me to change." ]
1,605
1,605
1,605
CONTRIBUTOR
null
# What does this PR do?

This PR updates all the examples to use the Transformers logging util. Before, no logs were displayed.
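The updated scripts follow roughly this pattern (a sketch of the logging util usage):

```python
from transformers.utils import logging

logging.set_verbosity_info()
logging.enable_default_handler()
logging.enable_explicit_format()

logger = logging.get_logger(__name__)
logger.info("This message is now actually displayed when the script runs.")
```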
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8458/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8458/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8458", "html_url": "https://github.com/huggingface/transformers/pull/8458", "diff_url": "https://github.com/huggingface/transformers/pull/8458.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8458.patch", "merged_at": 1605206580000 }
https://api.github.com/repos/huggingface/transformers/issues/8457
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8457/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8457/comments
https://api.github.com/repos/huggingface/transformers/issues/8457/events
https://github.com/huggingface/transformers/pull/8457
740,599,904
MDExOlB1bGxSZXF1ZXN0NTE5MDYwMjkw
8,457
Correct Markdown formatting in readme file
{ "login": "wlhgtc", "id": 16603773, "node_id": "MDQ6VXNlcjE2NjAzNzcz", "avatar_url": "https://avatars.githubusercontent.com/u/16603773?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wlhgtc", "html_url": "https://github.com/wlhgtc", "followers_url": "https://api.github.com/users/wlhgtc/followers", "following_url": "https://api.github.com/users/wlhgtc/following{/other_user}", "gists_url": "https://api.github.com/users/wlhgtc/gists{/gist_id}", "starred_url": "https://api.github.com/users/wlhgtc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wlhgtc/subscriptions", "organizations_url": "https://api.github.com/users/wlhgtc/orgs", "repos_url": "https://api.github.com/users/wlhgtc/repos", "events_url": "https://api.github.com/users/wlhgtc/events{/privacy}", "received_events_url": "https://api.github.com/users/wlhgtc/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread." ]
1,605
1,614
1,614
CONTRIBUTOR
null
I corrected the Markdown formatting in the WWM README so that everything renders correctly.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8457/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8457/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8457", "html_url": "https://github.com/huggingface/transformers/pull/8457", "diff_url": "https://github.com/huggingface/transformers/pull/8457.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8457.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/8456
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8456/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8456/comments
https://api.github.com/repos/huggingface/transformers/issues/8456/events
https://github.com/huggingface/transformers/issues/8456
740,525,316
MDU6SXNzdWU3NDA1MjUzMTY=
8,456
ValueError: No gradients provided for any variable: ['tf_bert_for_masked_lm_6/bert/embeddings/word_embeddings/weight:0', 'tf_bert_for_masked_lm_6/bert/embeddings/position_embeddings/embeddings:0'......
{ "login": "MarsSu0618", "id": 72376532, "node_id": "MDQ6VXNlcjcyMzc2NTMy", "avatar_url": "https://avatars.githubusercontent.com/u/72376532?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MarsSu0618", "html_url": "https://github.com/MarsSu0618", "followers_url": "https://api.github.com/users/MarsSu0618/followers", "following_url": "https://api.github.com/users/MarsSu0618/following{/other_user}", "gists_url": "https://api.github.com/users/MarsSu0618/gists{/gist_id}", "starred_url": "https://api.github.com/users/MarsSu0618/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MarsSu0618/subscriptions", "organizations_url": "https://api.github.com/users/MarsSu0618/orgs", "repos_url": "https://api.github.com/users/MarsSu0618/repos", "events_url": "https://api.github.com/users/MarsSu0618/events{/privacy}", "received_events_url": "https://api.github.com/users/MarsSu0618/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "change other questions.", "How did u solve this issue?", "How did u solve this problem? I got same problem like yours" ]
1,605
1,621
1,605
NONE
null
Hi! Everyone. I encounter some problems with TFBertForMaskedLM. I modify TFBertForMaskedLM layer according to "Condition-Bert Contextual Augmentation" paper. In short, my dataset sentences have 5 labels, then change type_token_ids to label_ids. so, i change bert.embeddings.token_type_embeddings . my model code as follows: ```python from_pretrain = 'bert-base-chinese' def create_model(): mlm_model = TFBertForMaskedLM.from_pretrained(from_pretrain, return_dict=True) mlm_model.bert.embeddings.token_type_embeddings = tf.keras.layers.Embedding(6, 768) return tf_bert_mlm_model model = create_model() ``` then, my tf dataset tensor as follows(batch_size=2): ``` {'input_ids': <tf.Tensor: shape=(2, 128), dtype=int32, numpy= array([[ 101, 103, 103, 928, 6249, 6244, 103, 7361, 4534, 5022, 3300, 3126, 511, 6313, 4825, 6291, 5206, 7514, 6352, 1162, 103, 100, 6745, 1057, 5080, 6244, 103, 6349, 4826, 103, 9039, 8599, 1564, 4500, 6257, 991, 6291, 6349, 3302, 1243, 103, 6313, 1257, 2200, 7710, 6349, 4826, 6752, 4761, 800, 103, 8024, 7344, 3632, 3300, 2552, 782, 1894, 4671, 4500, 511, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [ 101, 523, 791, 8532, 677, 5221, 524, 1920, 686, 103, 6240, 4634, 2466, 103, 2695, 103, 519, 5064, 1918, 736, 2336, 520, 4158, 2695, 1564, 4923, 8013, 678, 6734, 8038, 8532, 131, 120, 120, 8373, 119, 103, 9989, 119, 8450, 103, 100, 120, 12990, 8921, 8165, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=int32)>, 'attention_mask': <tf.Tensor: shape=(2, 128), dtype=int32, numpy= array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=int32)>, 'token_type_ids': <tf.Tensor: shape=(2, 128), dtype=int32, numpy= array([[3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=int32)>, 'labels': <tf.Tensor: 
shape=(2, 128), dtype=int32, numpy= array([[ -100, 704, 1751, -100, -100, -100, 2622, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 4826, -100, 6745, -100, -100, -100, 7710, -100, -100, 10873, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 8024, -100, -100, -100, -100, -100, -100, -100, -100, -100, 782, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100], [ -100, -100, -100, 3189, -100, -100, -100, -100, -100, 4518, -100, -100, -100, 2204, -100, 100, -100, -100, -100, -100, -100, -100, -100, 2695, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 8429, -100, -100, -100, 120, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100]], dtype=int32)>} ``` and model.compile() and model.fit() as follows: ```python optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5) loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) model.compile(optimizer=optimizer, loss=loss) model.fit(tf_sms_dataset, epochs=2) ``` But I always get the error message of ``` ValueError: in user code: /opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py:806 train_function * return step_function(self, iterator) /opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py:796 step_function ** outputs = model.distribute_strategy.run(run_step, args=(data,)) /opt/conda/lib/python3.7/site-packages/tensorflow/python/distribute/distribute_lib.py:1211 run return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs) /opt/conda/lib/python3.7/site-packages/tensorflow/python/distribute/distribute_lib.py:2585 call_for_each_replica return self._call_for_each_replica(fn, args, kwargs) /opt/conda/lib/python3.7/site-packages/tensorflow/python/distribute/distribute_lib.py:2945 _call_for_each_replica return fn(*args, **kwargs) /opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py:789 run_step ** outputs = model.train_step(data) /opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py:757 train_step self.trainable_variables) /opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py:2737 _minimize trainable_variables)) /opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:562 _aggregate_gradients filtered_grads_and_vars = _filter_grads(grads_and_vars) /opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:1271 _filter_grads ([v.name for _, v in grads_and_vars],)) ValueError: No gradients provided for any variable: ['tf_bert_for_masked_lm_6/bert/embeddings/word_embeddings/weight:0', 
'tf_bert_for_masked_lm_6/bert/embeddings/position_embeddings/embeddings:0', 'tf_bert_for_masked_lm_6/bert/embeddings/LayerNorm/gamma:0', ..........] ``` How to solve the problem? Thanks
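Two side notes on the snippet above, plus a hedged sketch. First, `create_model` defines `mlm_model` but returns `tf_bert_mlm_model`, which looks like a transcription typo. Second, "No gradients provided for any variable" is the classic symptom of Keras never connecting the compiled loss to the trainable variables: with the `labels` tensor buried inside the features dict, the compiled `SparseCategoricalCrossentropy` has nothing real to differentiate, and it also has no notion of the `-100` ignore index used in the batch above. Below is a minimal sketch of one common workaround, under the assumption that the model is built so Keras sees the MLM logits as its output (e.g. `return_dict=False`); this is an illustration, not a verified fix for this exact setup:

```python
import tensorflow as tf

# Yield (features, labels) tuples so Keras routes labels into the loss.
def split_labels(batch):
    features = dict(batch)          # shallow copy of the feature dict
    labels = features.pop("labels")
    return features, labels

tf_sms_dataset = tf_sms_dataset.map(split_labels)

# Masked loss that skips the -100 positions, roughly mirroring what the
# library's own MLM loss computation does.
def masked_sparse_ce(labels, logits):
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(
        from_logits=True, reduction=tf.keras.losses.Reduction.NONE
    )
    mask = tf.not_equal(labels, -100)
    safe_labels = tf.where(mask, labels, tf.zeros_like(labels))  # avoid -100 indices
    per_token = loss_fn(safe_labels, logits)
    weights = tf.cast(mask, per_token.dtype)
    return tf.reduce_sum(per_token * weights) / tf.maximum(tf.reduce_sum(weights), 1.0)

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=3e-5), loss=masked_sparse_ce)
model.fit(tf_sms_dataset, epochs=2)
```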
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8456/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8456/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8455
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8455/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8455/comments
https://api.github.com/repos/huggingface/transformers/issues/8455/events
https://github.com/huggingface/transformers/issues/8455
740,515,573
MDU6SXNzdWU3NDA1MTU1NzM=
8,455
Can't download models from huggingface.cn!
{ "login": "havetry", "id": 49902228, "node_id": "MDQ6VXNlcjQ5OTAyMjI4", "avatar_url": "https://avatars.githubusercontent.com/u/49902228?v=4", "gravatar_id": "", "url": "https://api.github.com/users/havetry", "html_url": "https://github.com/havetry", "followers_url": "https://api.github.com/users/havetry/followers", "following_url": "https://api.github.com/users/havetry/following{/other_user}", "gists_url": "https://api.github.com/users/havetry/gists{/gist_id}", "starred_url": "https://api.github.com/users/havetry/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/havetry/subscriptions", "organizations_url": "https://api.github.com/users/havetry/orgs", "repos_url": "https://api.github.com/users/havetry/repos", "events_url": "https://api.github.com/users/havetry/events{/privacy}", "received_events_url": "https://api.github.com/users/havetry/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Duplicate of #8449 " ]
1,605
1,605
1,605
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: - Python version: - PyTorch version (GPU?): - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?: ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten @TevenLeScao Blenderbot: @patrickvonplaten Bart: @patrickvonplaten Marian: @patrickvonplaten Pegasus: @patrickvonplaten mBART: @patrickvonplaten T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao RAG: @patrickvonplaten, @lhoestq FSMT: @stas00 examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [ ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. 2. 3. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> I want to down model from huggingface.cn, but I can't find the models to down. ![down](https://user-images.githubusercontent.com/49902228/98777922-8bf23780-242c-11eb-90b9-02d86cefac34.png) But I found the model had been download many times in noverber 9. Something happened I didn't know about? And what should I do to get models that I need? ![have_down](https://user-images.githubusercontent.com/49902228/98778321-23578a80-242d-11eb-9503-c46f3cde25f3.png)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8455/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8455/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8454
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8454/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8454/comments
https://api.github.com/repos/huggingface/transformers/issues/8454/events
https://github.com/huggingface/transformers/issues/8454
740,506,768
MDU6SXNzdWU3NDA1MDY3Njg=
8,454
Add POINTER model
{ "login": "dreasysnail", "id": 2461039, "node_id": "MDQ6VXNlcjI0NjEwMzk=", "avatar_url": "https://avatars.githubusercontent.com/u/2461039?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dreasysnail", "html_url": "https://github.com/dreasysnail", "followers_url": "https://api.github.com/users/dreasysnail/followers", "following_url": "https://api.github.com/users/dreasysnail/following{/other_user}", "gists_url": "https://api.github.com/users/dreasysnail/gists{/gist_id}", "starred_url": "https://api.github.com/users/dreasysnail/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dreasysnail/subscriptions", "organizations_url": "https://api.github.com/users/dreasysnail/orgs", "repos_url": "https://api.github.com/users/dreasysnail/repos", "events_url": "https://api.github.com/users/dreasysnail/events{/privacy}", "received_events_url": "https://api.github.com/users/dreasysnail/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
open
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "Thanks @patrickvonplaten for taking this. It's nice to work with you again :)", "Really interesting approach :hugs: \r\n\r\n@dreasysnail Do you think it is possible to pre-train a model from scratch on **one** GPU in a reasonable time? Could you say something about your used hardware setup and training time for the pre-training phase :thinking: ", "Thanks @stefan-it ! Regarding your question:\r\n\r\n> @dreasysnail Do you think it is possible to pre-train a model from scratch on **one** GPU in a reasonable time? Could you say something about your used hardware setup and training time for the pre-training phase 🤔\r\n\r\nThe speed advantage of this algorithm is more on the decoding side. For the training time, you can expect this takes roughly similar amount of time comparing to, say, fine-tuning a BERT. One GPU is possible but if your dataset is large the training could be slow. So I would recommend you fine-tune from what we have already pretrained for fast convergence and better quality. \r\n\r\nFor your reference, we were using 8/16*V100 GPUs to pretrain and fine-tune the models. The pretraining takes roughly one week and the fine-tuning takes 1-2 days. \r\n\r\n", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,605
1,631
null
CONTRIBUTOR
null
# 🌟 New model addition ## Model description [POINTER](https://github.com/dreasysnail/POINTER) is a progressive, non-autoregressive text-generation pre-training approach, published at EMNLP 2020 by Microsoft Research. POINTER generates fluent text in a progressive and parallel manner. With an empirically logarithmic number of generation steps, POINTER outperforms existing non-autoregressive approaches on hard-constrained text generation. The model essentially uses the BERT-large architecture, but an additional token is added to the vocabulary. Inference is performed by passing the input to the model iteratively. Since no existing model architecture in Hugging Face is compatible, I am not sure how to incorporate this into the library. ## Open source status * [x] the model implementation is available: (https://github.com/dreasysnail/POINTER) * [x] the model weights are available: [here](https://yizzhang.blob.core.windows.net/insertiont/ckpt.tar.gz?st=2020-08-18T20%3A49%3A02Z&se=2024-01-16T20%3A49%3A00Z&sp=rl&sv=2018-03-28&sr=b&sig=PKrSJt38cmY0P%2FBcZuyK%2Btm3bXyYzzfazaqTu1%2F%2FDtc%3D) * [x] who are the authors: @dreasysnail
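To make "progressive and parallel" inference concrete, here is an illustrative sketch of the decoding loop as I understand it from the description above. The helper name `predict_insertions` and the `noi_id` no-insertion marker are hypothetical stand-ins, not the repo's actual API, so treat this as pseudocode rather than the authors' method:

```python
# Illustrative only: each round, the model proposes (in one parallel forward
# pass) a token to insert after every current position, or a special
# "no-insertion" marker (noi_id). Decoding stops once every slot says
# "no insertion". Because the sequence can roughly double per round, the
# number of rounds grows roughly logarithmically in the output length.
def progressive_generate(model, tokens, noi_id, max_rounds=16):
    for _ in range(max_rounds):
        proposals = model.predict_insertions(tokens)  # hypothetical helper
        if all(p == noi_id for p in proposals):
            break
        grown = []
        for tok, prop in zip(tokens, proposals):
            grown.append(tok)
            if prop != noi_id:
                grown.append(prop)
        tokens = grown
    return tokens
```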
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8454/reactions", "total_count": 7, "+1": 0, "-1": 0, "laugh": 0, "hooray": 2, "confused": 0, "heart": 2, "rocket": 2, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/8454/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/8453
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8453/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8453/comments
https://api.github.com/repos/huggingface/transformers/issues/8453/events
https://github.com/huggingface/transformers/issues/8453
740,457,445
MDU6SXNzdWU3NDA0NTc0NDU=
8,453
_pickle.PicklingError: Can't pickle typing.Union[str, NoneType]: it's not the same object as typing.Union
{ "login": "JeremySun1224", "id": 45678423, "node_id": "MDQ6VXNlcjQ1Njc4NDIz", "avatar_url": "https://avatars.githubusercontent.com/u/45678423?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JeremySun1224", "html_url": "https://github.com/JeremySun1224", "followers_url": "https://api.github.com/users/JeremySun1224/followers", "following_url": "https://api.github.com/users/JeremySun1224/following{/other_user}", "gists_url": "https://api.github.com/users/JeremySun1224/gists{/gist_id}", "starred_url": "https://api.github.com/users/JeremySun1224/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JeremySun1224/subscriptions", "organizations_url": "https://api.github.com/users/JeremySun1224/orgs", "repos_url": "https://api.github.com/users/JeremySun1224/repos", "events_url": "https://api.github.com/users/JeremySun1224/events{/privacy}", "received_events_url": "https://api.github.com/users/JeremySun1224/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Maybe @sgugger has an idea", "This is a duplicate of #8212 which gives the workaround (install python 3.7) while waiting for the new release of datasets, which will fix that bug.", "Maybe we should do a quick patch release of `datasets` just for this one @lhoestq?", "It seems an issue ´till Python 3.6 on Pickle (I´m exactly on 3.6). As I´m (really) on a hurry, just commented the error on Pickle and It run normally.", "> This is a duplicate of #8212 which gives the workaround (install python 3.7) while waiting for the new release of datasets, which will fix that bug.\r\n\r\n**Thank you very much. 🌹**\r\nI did run the above code correctly in Python 3.7.9, but it's strange that the following error occurred. This didn't happen in my previous Python 3.6.9 environment.\r\nTraceback (most recent call last):\r\n File \"D:\\Anaconda3\\envs\\RobertaWWMExt\\lib\\site-packages\\urllib3\\connectionpool.py\", line 696, in urlopen\r\n self._prepare_proxy(conn)\r\n File \"D:\\Anaconda3\\envs\\RobertaWWMExt\\lib\\site-packages\\urllib3\\connectionpool.py\", line 964, in _prepare_proxy\r\n conn.connect()\r\n File \"D:\\Anaconda3\\envs\\RobertaWWMExt\\lib\\site-packages\\urllib3\\connection.py\", line 359, in connect\r\n conn = self._connect_tls_proxy(hostname, conn)\r\n File \"D:\\Anaconda3\\envs\\RobertaWWMExt\\lib\\site-packages\\urllib3\\connection.py\", line 502, in _connect_tls_proxy\r\n ssl_context=ssl_context,\r\n File \"D:\\Anaconda3\\envs\\RobertaWWMExt\\lib\\site-packages\\urllib3\\util\\ssl_.py\", line 424, in ssl_wrap_socket\r\n ssl_sock = _ssl_wrap_socket_impl(sock, context, tls_in_tls)\r\n File \"D:\\Anaconda3\\envs\\RobertaWWMExt\\lib\\site-packages\\urllib3\\util\\ssl_.py\", line 466, in _ssl_wrap_socket_impl\r\n return ssl_context.wrap_socket(sock)\r\n File \"D:\\Anaconda3\\envs\\RobertaWWMExt\\lib\\ssl.py\", line 423, in wrap_socket\r\n session=session\r\n File \"D:\\Anaconda3\\envs\\RobertaWWMExt\\lib\\ssl.py\", line 870, in _create\r\n self.do_handshake()\r\n File \"D:\\Anaconda3\\envs\\RobertaWWMExt\\lib\\ssl.py\", line 1139, in do_handshake\r\n self._sslobj.do_handshake()\r\n**ssl.SSLError: [SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:1091)**\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"D:\\Anaconda3\\envs\\RobertaWWMExt\\lib\\site-packages\\requests\\adapters.py\", line 449, in send\r\n timeout=timeout\r\n File \"D:\\Anaconda3\\envs\\RobertaWWMExt\\lib\\site-packages\\urllib3\\connectionpool.py\", line 756, in urlopen\r\n method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]\r\n File \"D:\\Anaconda3\\envs\\RobertaWWMExt\\lib\\site-packages\\urllib3\\util\\retry.py\", line 573, in increment\r\n raise MaxRetryError(_pool, url, error or ResponseError(cause))\r\nurllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /datasets.huggingface.co/datasets/datasets/text/text.py (Caused by SSLError(SSLError(1, '[SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:1091)')))\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"./run_mlm_wwm.py\", line 336, in <module>\r\n main()\r\n File \"./run_mlm_wwm.py\", line 213, in main\r\n datasets = load_dataset(extension, data_files=data_files)\r\n File \"D:\\Anaconda3\\envs\\RobertaWWMExt\\lib\\site-packages\\datasets\\load.py\", line 590, in load_dataset\r\n path, script_version=script_version, 
download_config=download_config, download_mode=download_mode, dataset=True\r\n File \"D:\\Anaconda3\\envs\\RobertaWWMExt\\lib\\site-packages\\datasets\\load.py\", line 264, in prepare_module\r\n head_hf_s3(path, filename=name, dataset=dataset)\r\n File \"D:\\Anaconda3\\envs\\RobertaWWMExt\\lib\\site-packages\\datasets\\utils\\file_utils.py\", line 200, in head_hf_s3\r\n return requests.head(hf_bucket_url(identifier=identifier, filename=filename, use_cdn=use_cdn, dataset=dataset))\r\n File \"D:\\Anaconda3\\envs\\RobertaWWMExt\\lib\\site-packages\\requests\\api.py\", line 104, in head\r\n return request('head', url, **kwargs)\r\n File \"D:\\Anaconda3\\envs\\RobertaWWMExt\\lib\\site-packages\\requests\\api.py\", line 61, in request\r\n return session.request(method=method, url=url, **kwargs)\r\n File \"D:\\Anaconda3\\envs\\RobertaWWMExt\\lib\\site-packages\\requests\\sessions.py\", line 542, in request\r\n resp = self.send(prep, **send_kwargs)\r\n File \"D:\\Anaconda3\\envs\\RobertaWWMExt\\lib\\site-packages\\requests\\sessions.py\", line 655, in send\r\n r = adapter.send(request, **kwargs)\r\n File \"D:\\Anaconda3\\envs\\RobertaWWMExt\\lib\\site-packages\\requests\\adapters.py\", line 514, in send\r\n raise SSLError(e, request=request)\r\nrequests.exceptions.SSLError: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /datasets.huggingface.co/datasets/datasets/text/text.py **(Caused by SSLError(SSLError(1, '[SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:1091)')))**\r\n\r\nCould you give me a suggestion about this bug, please.", "Pinging @julien-c and @Pierrci here additionally (connection to S3)", "> Maybe we should do a quick patch release of `datasets` just for this one @lhoestq?\r\n\r\nYes will do a patch release soon", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,605
1,611
1,611
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version:3.4.0 - Platform:linux - Python version:3.6 - PyTorch version (GPU?):1.6 cuda10 - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?: When I try to train roberta-wwm from scratch for my dataset , I get this error when I follow transformers' run_mlm_wwm.py code ``` !python run_mlm_wwm.py --model_name_or_path hfl/chinese-roberta-wwm-ext --train_file ../../../../pretrain_data/pretrain_train.txt --validation_file ../../../../pretrain_data/pretrain_val.txt --train_ref_file ../../../../pretrain_data/ref_train.txt --validation_ref_file ../../../../pretrain_data/ref_val.txt --do_train --do_eval --output_dir ./output ``` ``` All the weights of BertForMaskedLM were initialized from the model checkpoint at hfl/chinese-roberta-wwm-ext. If your task is similar to the task the model of the checkpoint was trained on, you can already use BertForMaskedLM for predictions without further training. Asking to truncate to max_length but no maximum length is provided and the model has no predefined maximum length. Default to no truncation. Traceback (most recent call last): File "run_mlm_wwm.py", line 333, in <module> main() File "run_mlm_wwm.py", line 274, in main load_from_cache_file=not data_args.overwrite_cache, File "/usr/local/lib/python3.6/dist-packages/datasets/dataset_dict.py", line 300, in map for k, dataset in self.items() File "/usr/local/lib/python3.6/dist-packages/datasets/dataset_dict.py", line 300, in <dictcomp> for k, dataset in self.items() File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py", line 1256, in map update_data=update_data, File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py", line 156, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/usr/local/lib/python3.6/dist-packages/datasets/fingerprint.py", line 158, in wrapper self._fingerprint, transform, kwargs_for_fingerprint File "/usr/local/lib/python3.6/dist-packages/datasets/fingerprint.py", line 105, in update_fingerprint hasher.update(transform_args[key]) File "/usr/local/lib/python3.6/dist-packages/datasets/fingerprint.py", line 57, in update self.m.update(self.hash(value).encode("utf-8")) File "/usr/local/lib/python3.6/dist-packages/datasets/fingerprint.py", line 53, in hash return cls.hash_default(value) File "/usr/local/lib/python3.6/dist-packages/datasets/fingerprint.py", line 46, in hash_default return cls.hash_bytes(dumps(value)) File "/usr/local/lib/python3.6/dist-packages/datasets/utils/py_utils.py", line 367, in dumps dump(obj, file) File "/usr/local/lib/python3.6/dist-packages/datasets/utils/py_utils.py", line 339, in dump Pickler(file, recurse=True).dump(obj) File "/usr/local/lib/python3.6/dist-packages/dill/_dill.py", line 454, in dump StockPickler.dump(self, obj) File "/usr/lib/python3.6/pickle.py", line 409, in dump self.save(obj) File "/usr/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File "/usr/local/lib/python3.6/dist-packages/dill/_dill.py", line 1447, in save_function obj.__dict__, fkwdefaults), obj=obj) File "/usr/lib/python3.6/pickle.py", line 610, in save_reduce save(args) File "/usr/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File "/usr/lib/python3.6/pickle.py", line 751, in save_tuple 
save(element) File "/usr/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File "/usr/lib/python3.6/pickle.py", line 736, in save_tuple save(element) File "/usr/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File "/usr/local/lib/python3.6/dist-packages/dill/_dill.py", line 1178, in save_cell pickler.save_reduce(_create_cell, (f,), obj=obj) File "/usr/lib/python3.6/pickle.py", line 610, in save_reduce save(args) File "/usr/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File "/usr/lib/python3.6/pickle.py", line 736, in save_tuple save(element) File "/usr/lib/python3.6/pickle.py", line 521, in save self.save_reduce(obj=obj, *rv) File "/usr/lib/python3.6/pickle.py", line 605, in save_reduce save(cls) File "/usr/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File "/usr/local/lib/python3.6/dist-packages/dill/_dill.py", line 1374, in save_type obj.__bases__, _dict), obj=obj) File "/usr/lib/python3.6/pickle.py", line 610, in save_reduce save(args) File "/usr/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File "/usr/lib/python3.6/pickle.py", line 751, in save_tuple save(element) File "/usr/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File "/usr/local/lib/python3.6/dist-packages/dill/_dill.py", line 941, in save_module_dict StockPickler.save_dict(pickler, obj) File "/usr/lib/python3.6/pickle.py", line 821, in save_dict self._batch_setitems(obj.items()) File "/usr/lib/python3.6/pickle.py", line 847, in _batch_setitems save(v) File "/usr/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File "/usr/local/lib/python3.6/dist-packages/dill/_dill.py", line 941, in save_module_dict StockPickler.save_dict(pickler, obj) File "/usr/lib/python3.6/pickle.py", line 821, in save_dict self._batch_setitems(obj.items()) File "/usr/lib/python3.6/pickle.py", line 847, in _batch_setitems save(v) File "/usr/lib/python3.6/pickle.py", line 507, in save self.save_global(obj, rv) File "/usr/lib/python3.6/pickle.py", line 927, in save_global (obj, module_name, name)) **_pickle.PicklingError: Can't pickle typing.Union[str, NoneType]: it's not the same object as typing.Union** ``` please help me. <!-- A clear and concise description of what you would expect to happen. -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8453/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/8453/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8452
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8452/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8452/comments
https://api.github.com/repos/huggingface/transformers/issues/8452/events
https://github.com/huggingface/transformers/issues/8452
740,445,596
MDU6SXNzdWU3NDA0NDU1OTY=
8,452
Fine-tuning GPT: problems with padding
{ "login": "ioana-blue", "id": 17202292, "node_id": "MDQ6VXNlcjE3MjAyMjky", "avatar_url": "https://avatars.githubusercontent.com/u/17202292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ioana-blue", "html_url": "https://github.com/ioana-blue", "followers_url": "https://api.github.com/users/ioana-blue/followers", "following_url": "https://api.github.com/users/ioana-blue/following{/other_user}", "gists_url": "https://api.github.com/users/ioana-blue/gists{/gist_id}", "starred_url": "https://api.github.com/users/ioana-blue/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ioana-blue/subscriptions", "organizations_url": "https://api.github.com/users/ioana-blue/orgs", "repos_url": "https://api.github.com/users/ioana-blue/repos", "events_url": "https://api.github.com/users/ioana-blue/events{/privacy}", "received_events_url": "https://api.github.com/users/ioana-blue/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Another suspicious flag in the config for GPT is `predict_special_tokens` which is set to `True` (no such thing for GPT2 config). I did a grep on this flag and it seems to be present only in the config class and not used anywhere else. Somewhat strange. ", "I might have found a problem in my scripts/code. I've been using BERT-based models so far and when examples are converted to features, the batch encoder is initialized with padding set to max_length; I'm trying to initialize it to do_not_pad. In theory, this should fix it. In practice... we shall see :) ", "Indeed, the root of the issue seems to be that you're asking your tokenizer to pad the sequences, but it does not have a padding token, and therefore cannot do so.\r\n\r\nIf setting the tokenizer's pad token to the eos token doesn't work, you can try adding a new token to the tokenizer with the `add_special_tokens()` method, and then resize the model embedding layer.\r\n\r\nSeeing as you should use the attention mask when padding, these tokens should have close to zero influence on your training.\r\n\r\nSee the docs about the aforementioned methods [here](https://huggingface.co/transformers/internal/tokenization_utils.html?highlight=resize_token_embeddings#transformers.tokenization_utils_base.SpecialTokensMixin.add_special_tokens)", "Isn't it more straightforward to ask the tokenizer to not pad the sequence (for gpt* models)?\r\n \r\nThe confusion came from the fact that setting the padding token to eos works for GPT2* models (because eos is defined in the config of the pretrained model), but doesn't for GPT (because eos is not defined)", "So no padding seems to work looking at a few samples (but no batching possible). I'll start a few training jobs, I'll know tomorrow if it really trained properly (large dataset). ", "Yes, it is more straightforward, but as you've said, no batching can be made. This is quite limiting and would tremendously slow down the training; if your training is small enough then that might still be enough!", "@LysandreJik It's slow indeed, but I think I can live with it. I can't recall what the problem was, even fro gpt2 where I could assigned pad = eos, I got an error when I tried to batch. ", "Ah, this is weird. If you ever stumble upon this issue again, please let us know so that we may see what's wrong. Thanks!", "To get GPT2 to work, you'll also need to update the config's pad token to be the eos token:\r\n`config.pad_token_id = config.eos_token_id`\r\n\r\nFor example, in `examples/lightning_base.py`, I've added the below lines right after loading the tokenizer in BaseTransformer().\\_\\_init\\_\\_():\r\n```py\r\n if self.tokenizer.pad_token is None:\r\n self.tokenizer.pad_token = self.tokenizer.eos_token\r\n self.config.pad_token_id = self.config.eos_token_id\r\n```", "@ethanjperez thanks for the tip, I'll give it a try!\r\n", "I think this works. I managed to train gpt and gpt2. I have an issue during evaluation with gpt2, but I don't think it's related. Closing this one, thanks @ethanjperez ", "```\r\n if self.tokenizer.pad_token is None:\r\n self.tokenizer.pad_token = self.tokenizer.eos_token\r\n self.config.pad_token_id = self.config.eos_token_id\r\n```\r\n\r\nUsing this can train, but save will meet so many problems!\r\n\r\nhttps://github.com/huggingface/transformers/issues/5571\r\n\r\n" ]
1,605
1,685
1,614
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.4.0 - Platform: Linux-3.10.0-957.el7.x86_64-x86_64-with-glibc2.10 - Python version: 3.8.3 - PyTorch version (GPU?): 1.5.1+cu101 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help @LysandreJik tokenizers: @mfuntowicz ## Information Model I am using openai-gpt: The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The scripts are my own scripts inspired by the glue examples. The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) Simple binary text classification, nothing fancy, inspired by the glue example files. ## To reproduce Steps to reproduce the behavior: As reported in other issues, padding is not done for GPT* models. One workaround for this issue is to set the padding token to the eos token. This seems to work fine for the GPT2 models (I tried GPT2 and DistilGPT2), but creates some issues for the GPT model. Comparing the outputs of the two models, it looks like the config file for the GPT2 models contains ids for bos and eos tokens, while these are missing from the GPT config file (not sure this is the real problem). Some other interesting bits from the outputs: ``` ftfy or spacy is not installed using BERT BasicTokenizer instead of SpaCy & ftfy. Using eos_token, but it is not set yet. ``` Bottom line, it crashes with ValueError: Asking to pad but the tokenizer does not have a padding token. Please select a token to use as `pad_token` `(tokenizer.pad_token = tokenizer.eos_token e.g.)` or add a new pad token via `tokenizer.add_special_tokens({'pad_token': '[PAD]'})` - despite the fact that I have `tokenizer.pad_token = tokenizer.eos_token` in the code. I'm expecting some issue with the tokenizer/missing ids for the special tokens. Wondering if there is something missing in the config file for the model. ## Expected behavior No error? :) I don't see any of these issues after setting the padding token to the eos token for the GPT2 model. As I briefly mentioned above, the only difference that I see in the config file is the ids for the eos/bos tokens, which seem to be missing from the GPT model config. Thanks for your help!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8452/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8452/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8451
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8451/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8451/comments
https://api.github.com/repos/huggingface/transformers/issues/8451/events
https://github.com/huggingface/transformers/issues/8451
740,442,985
MDU6SXNzdWU3NDA0NDI5ODU=
8,451
config.attention_head_size for out-of-the-box structured pruning
{ "login": "dsindex", "id": 8259057, "node_id": "MDQ6VXNlcjgyNTkwNTc=", "avatar_url": "https://avatars.githubusercontent.com/u/8259057?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dsindex", "html_url": "https://github.com/dsindex", "followers_url": "https://api.github.com/users/dsindex/followers", "following_url": "https://api.github.com/users/dsindex/following{/other_user}", "gists_url": "https://api.github.com/users/dsindex/gists{/gist_id}", "starred_url": "https://api.github.com/users/dsindex/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dsindex/subscriptions", "organizations_url": "https://api.github.com/users/dsindex/orgs", "repos_url": "https://api.github.com/users/dsindex/repos", "events_url": "https://api.github.com/users/dsindex/events{/privacy}", "received_events_url": "https://api.github.com/users/dsindex/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@dsindex FYI, I am working on creating a PR including this feature for the effort of #8083 .", "@ykim362 great! closing issue here." ]
1,605
1,605
1,605
NONE
null
# 🚀 Feature request ## Motivation For structured pruning as in `fastformers` (https://github.com/microsoft/fastformers#pruning-models), the transformers source code has to be modified to make `attention_head_size` configurable. For example: 1. configuration_bert.py https://github.com/microsoft/fastformers/blob/main/src/transformers/configuration_bert.py#L128 2. modeling_bert.py https://github.com/microsoft/fastformers/blob/main/src/transformers/modeling_bert.py#L192 https://github.com/microsoft/fastformers/blob/main/src/transformers/modeling_bert.py#L263 Is it possible to set `attention_head_size` from the outside (config.json)? ## Your contribution
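A minimal sketch of the requested change follows (my illustration, mirroring the fastformers fork linked above). The `getattr` fallback is an assumption on my part about keeping backward compatibility: existing config.json files without the field would retain the stock `hidden_size // num_attention_heads` behaviour:

```python
import torch.nn as nn

class BertSelfAttention(nn.Module):  # heavily simplified
    def __init__(self, config):
        super().__init__()
        self.num_attention_heads = config.num_attention_heads
        # Read attention_head_size from config.json when present,
        # otherwise derive it the usual way.
        self.attention_head_size = getattr(
            config,
            "attention_head_size",
            config.hidden_size // config.num_attention_heads,
        )
        self.all_head_size = self.num_attention_heads * self.attention_head_size
        self.query = nn.Linear(config.hidden_size, self.all_head_size)
        self.key = nn.Linear(config.hidden_size, self.all_head_size)
        self.value = nn.Linear(config.hidden_size, self.all_head_size)
```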
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8451/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8451/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8450
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8450/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8450/comments
https://api.github.com/repos/huggingface/transformers/issues/8450/events
https://github.com/huggingface/transformers/issues/8450
740,420,659
MDU6SXNzdWU3NDA0MjA2NTk=
8,450
A layman wants to train DistilBERT
{ "login": "WalterZhong", "id": 38905736, "node_id": "MDQ6VXNlcjM4OTA1NzM2", "avatar_url": "https://avatars.githubusercontent.com/u/38905736?v=4", "gravatar_id": "", "url": "https://api.github.com/users/WalterZhong", "html_url": "https://github.com/WalterZhong", "followers_url": "https://api.github.com/users/WalterZhong/followers", "following_url": "https://api.github.com/users/WalterZhong/following{/other_user}", "gists_url": "https://api.github.com/users/WalterZhong/gists{/gist_id}", "starred_url": "https://api.github.com/users/WalterZhong/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/WalterZhong/subscriptions", "organizations_url": "https://api.github.com/users/WalterZhong/orgs", "repos_url": "https://api.github.com/users/WalterZhong/repos", "events_url": "https://api.github.com/users/WalterZhong/events{/privacy}", "received_events_url": "https://api.github.com/users/WalterZhong/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "[https://github.com/google-research/bert](url)\r\n\r\nIn this url, I find some files.\r\n![image](https://user-images.githubusercontent.com/38905736/98761374-3c057780-2410-11eb-8028-4f896e86aa24.png)\r\n\r\nBut I don't whether they are the data which used to be the training data of distilbert.\r\nAnd the distilbert model need \"dump.txt \", not \".json\", whether the data we need is included in it?", "I find a text in the goole project bert.\r\n[https://github.com/google-research/bert/blob/master/sample_text.txt](url)\r\nIs it the training data of this project?", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,605
1,611
1,611
NONE
null
I want to train DistilBERT, but I don't know how to get the training data. The article describes the training data as a concatenation of the Toronto Book Corpus and English Wikipedia (the same training data as the English version of BERT). How can I get it? ![image](https://user-images.githubusercontent.com/38905736/98760883-378c8f00-240f-11eb-881a-2048e2967617.png) https://github.com/huggingface/transformers/blob/1ab8dc44b3d84ed1894f5b6a6fab58fb39298fc7/examples/distillation/README.md
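A hedged pointer, my addition rather than something from this thread: English Wikipedia (and a community BookCorpus replica) can be pulled through the `datasets` library and flattened into the plain-text `dump.txt` (one sequence per line) that the distillation scripts expect. The dataset names and snapshot version below are assumptions that may need adjusting:

```python
from datasets import load_dataset

# English Wikipedia snapshot; "20200501.en" is one config that existed in
# datasets 1.x, adjust to whatever snapshot is available.
wiki = load_dataset("wikipedia", "20200501.en", split="train")

with open("dump.txt", "w", encoding="utf-8") as f:
    for article in wiki:
        text = article["text"].replace("\n", " ").strip()
        if text:
            f.write(text + "\n")

# A community replica also exists ("bookcorpus"), since the original
# Toronto Book Corpus is no longer distributed:
# books = load_dataset("bookcorpus", split="train")
```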
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8450/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8450/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8449
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8449/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8449/comments
https://api.github.com/repos/huggingface/transformers/issues/8449/events
https://github.com/huggingface/transformers/issues/8449
740,414,070
MDU6SXNzdWU3NDA0MTQwNzA=
8,449
Can't find model to download
{ "login": "havetry", "id": 49902228, "node_id": "MDQ6VXNlcjQ5OTAyMjI4", "avatar_url": "https://avatars.githubusercontent.com/u/49902228?v=4", "gravatar_id": "", "url": "https://api.github.com/users/havetry", "html_url": "https://github.com/havetry", "followers_url": "https://api.github.com/users/havetry/followers", "following_url": "https://api.github.com/users/havetry/following{/other_user}", "gists_url": "https://api.github.com/users/havetry/gists{/gist_id}", "starred_url": "https://api.github.com/users/havetry/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/havetry/subscriptions", "organizations_url": "https://api.github.com/users/havetry/orgs", "repos_url": "https://api.github.com/users/havetry/repos", "events_url": "https://api.github.com/users/havetry/events{/privacy}", "received_events_url": "https://api.github.com/users/havetry/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "I encountered the same problem", "Will release a new version of that UX tomorrow", "We just added file sizes, and download links, to the lists of model files, see for instance:\r\n\r\n<img width=\"1592\" alt=\"Screenshot 2020-11-13 at 22 55 23\" src=\"https://user-images.githubusercontent.com/326577/99125288-be896500-25d1-11eb-84f5-03eb9b44f29d.png\">\r\n\r\nhttps://huggingface.co/dbmdz/bert-base-turkish-cased/tree/main\r\n\r\nLet us know if this solves your use case @havetry @xlxwalex.", "> We just added file sizes, and download links, to the lists of model files, see for instance:\r\n> \r\n> <img alt=\"Screenshot 2020-11-13 at 22 55 23\" width=\"1592\" src=\"https://user-images.githubusercontent.com/326577/99125288-be896500-25d1-11eb-84f5-03eb9b44f29d.png\">\r\n> \r\n> https://huggingface.co/dbmdz/bert-base-turkish-cased/tree/main\r\n> \r\n> Let us know if this solves your use case @havetry @xlxwalex.\r\n\r\nGood job, thanks!", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,605
1,619
1,619
NONE
null
# 🚀 Feature request <!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. --> ## Motivation <!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too. --> ## Your contribution I can't find any model to down in "huggingface.cn", so what happened? ![down](https://user-images.githubusercontent.com/49902228/98759351-e202b300-240b-11eb-87ea-e2f495d3afbe.png) but I find many people downed the model in Noverber 9, so why I can't find model today? <!-- Is there any way that you could help, e.g. by submitting a PR? Make sure to read the CONTRIBUTING.MD readme: https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8449/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8449/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8448
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8448/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8448/comments
https://api.github.com/repos/huggingface/transformers/issues/8448/events
https://github.com/huggingface/transformers/issues/8448
740,197,842
MDU6SXNzdWU3NDAxOTc4NDI=
8,448
Make sure the slot variables are created under the same strategy scope.
{ "login": "vlreinier", "id": 43336873, "node_id": "MDQ6VXNlcjQzMzM2ODcz", "avatar_url": "https://avatars.githubusercontent.com/u/43336873?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vlreinier", "html_url": "https://github.com/vlreinier", "followers_url": "https://api.github.com/users/vlreinier/followers", "following_url": "https://api.github.com/users/vlreinier/following{/other_user}", "gists_url": "https://api.github.com/users/vlreinier/gists{/gist_id}", "starred_url": "https://api.github.com/users/vlreinier/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vlreinier/subscriptions", "organizations_url": "https://api.github.com/users/vlreinier/orgs", "repos_url": "https://api.github.com/users/vlreinier/repos", "events_url": "https://api.github.com/users/vlreinier/events{/privacy}", "received_events_url": "https://api.github.com/users/vlreinier/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I think i already got an answer to my question from this page https://github.com/huggingface/transformers/blob/master/examples/token-classification/run_tf_ner.py :)\r\n\r\nwith training_args.strategy.scope():\r\n model = TFBertForTokenClassification.from_pretrained(bert_model,\r\n cache_dir=cache_dir, \r\n num_labels=len(label2id), \r\n label2id=label2id, \r\n id2label={v:k for k,v in label2id.items()}\r\n )\r\n model.summary()" ]
1,605
1,605
1,605
NONE
null
- `transformers` version: 3.5.0
- Platform: jupyter notebook
- Python version: 3.6.9
- Tensorflow version (GPU?): 2.3.1
- Using GPU in script?: single RTX 2080 Ti
- Model: "distilbert-base-multilingual-cased"

```python
with tf.device('/device:GPU:0'):
    model.compile(optimizer=optimizer, loss=loss_fn)
    model.fit(train_dataset.batch(batch_size_train), epochs=1)
```

The code above works fine. Using TFTrainer with the code below produces a strategy error. Note: `tf.config.list_physical_devices('GPU')` gives `[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]`.

Steps to reproduce the behavior:

```python
from transformers import BertTokenizer, TFBertForTokenClassification
from transformers import __version__
from transformers import TFTrainer, TFTrainingArguments

label2id = {"False": 0, "True": 1}
bert_model = "distilbert-base-multilingual-cased"
cache_dir = "cache/distilbert"

model = TFBertForTokenClassification.from_pretrained(
    bert_model,
    cache_dir=cache_dir,
    num_labels=len(label2id),
    label2id=label2id,
    id2label={v: k for k, v in label2id.items()}
)

training_args = TFTrainingArguments(
    output_dir='cp',
    num_train_epochs=1,
    per_device_train_batch_size=16,
    warmup_steps=500,
    weight_decay=0.05,
)

trainer = TFTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
)

trainer.train()
```

```
ValueError: Trying to create optimizer slot variable under the scope for tf.distribute.Strategy (<tensorflow.python.distribute.one_device_strategy.OneDeviceStrategy object at 0x7effe7039588>), which is different from the scope used for the original variable (<tf.Variable 'tf_bert_for_token_classification/bert/embeddings/word_embeddings/weight:0' shape=(119547, 768) dtype=float32, numpy= array([[ 0.01447453, -0.03549159, 0.03377417, ..., -0.01235564, 0.00624704, -0.01201372], [ 0.00689944, 0.00139387, -0.00716509, ..., 0.02127312, -0.00164859, -0.02350472], [ 0.00123599, -0.02220839, -0.01472212, ..., -0.02844208, -0.01958628, 0.01139562], ..., [ 0.03488934, 0.00115632, 0.0073231 , ..., 0.00768381, -0.02942067, -0.00667366], [ 0.00224815, -0.00895759, 0.0046453 , ..., 0.00379816, 0.00176853, -0.01759749], [-0.02521203, -0.03274821, -0.00520367, ..., -0.01396327, 0.0071948 , -0.01428833]], dtype=float32)>). Make sure the slot variables are created under the same strategy scope. This may happen if you're restoring from a checkpoint outside the scope
```
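For reference, here is the resolution from the comment above as a self-contained sketch: building the model inside `training_args.strategy.scope()` makes the optimizer's slot variables live under the same `tf.distribute` strategy as the model weights, which is exactly what the `ValueError` complains about. Argument values are copied from the report, and `train_dataset` is assumed to exist as in the original snippet:

```python
from transformers import TFBertForTokenClassification, TFTrainer, TFTrainingArguments

label2id = {"False": 0, "True": 1}

training_args = TFTrainingArguments(
    output_dir="cp",
    num_train_epochs=1,
    per_device_train_batch_size=16,
    warmup_steps=500,
    weight_decay=0.05,
)

# Create the model under the trainer's strategy scope (as in run_tf_ner.py).
with training_args.strategy.scope():
    model = TFBertForTokenClassification.from_pretrained(
        "distilbert-base-multilingual-cased",
        num_labels=len(label2id),
        label2id=label2id,
        id2label={v: k for k, v in label2id.items()},
    )

trainer = TFTrainer(model=model, args=training_args, train_dataset=train_dataset)
trainer.train()
```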
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8448/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8448/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8447
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8447/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8447/comments
https://api.github.com/repos/huggingface/transformers/issues/8447/events
https://github.com/huggingface/transformers/issues/8447
740,168,649
MDU6SXNzdWU3NDAxNjg2NDk=
8,447
Model name 'facebook/rag-sequence-base/*' not found when running examples/rag/finetune.sh
{ "login": "sabetAI", "id": 28828395, "node_id": "MDQ6VXNlcjI4ODI4Mzk1", "avatar_url": "https://avatars.githubusercontent.com/u/28828395?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sabetAI", "html_url": "https://github.com/sabetAI", "followers_url": "https://api.github.com/users/sabetAI/followers", "following_url": "https://api.github.com/users/sabetAI/following{/other_user}", "gists_url": "https://api.github.com/users/sabetAI/gists{/gist_id}", "starred_url": "https://api.github.com/users/sabetAI/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sabetAI/subscriptions", "organizations_url": "https://api.github.com/users/sabetAI/orgs", "repos_url": "https://api.github.com/users/sabetAI/repos", "events_url": "https://api.github.com/users/sabetAI/events{/privacy}", "received_events_url": "https://api.github.com/users/sabetAI/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi, I have a related issue. This happen to `\"facebook/rag-token-base\"` and `\"facebook/rag-token-nq\"` and `\"facebook/rag-sequence-nq\"` as well.\r\n\r\nBasic loading failed (was able to do it until around 2 days ago -- I use version 3.5.0)\r\nBoth\r\n\r\n`tokenizer = RagTokenizer.from_pretrained(\"facebook/rag-sequence-nq\")`\r\nand\r\n`retriever = RagRetriever.from_pretrained(\"facebook/rag-token-nq\", index_name=\"exact\", use_dummy_dataset=True)`\r\n\r\nresult in the same error message: \r\n\r\n`OSError: Can't load tokenizer for 'facebook/rag-sequence-nq/question_encoder_tokenizer'.`\r\n\r\n<<< Seem like it add the wrong path `question_encoder_tokenizer` at the end.\r\n", "to add to @ratthachat's comment: I observe the same problem when loading the model with:\r\n\r\n`model = RagTokenForGeneration.from_pretrained(\"facebook/rag-token-nq\") `\r\n", "Tagging @julien-c @Pierrci here. Maybe an issue related to the migration to git/git-lfs", "Initial poster seems to be running `transformers version: 3.3.1` which makes me suspect it might not be related to the git/git-lfs migration\r\n\r\nUpdate: @lhoestq is looking into it", "@lhoestq @julien-c @thomwolf \r\nSorry to ask, but I am translating TFRag and would really love to continue before long hollidays.\r\nCould it be possible to fix only the wrong file path (the last `question_encoder_tokenizer`) in \r\n\r\n`OSError: Can't load tokenizer for 'facebook/rag-sequence-nq/question_encoder_tokenizer'.`\r\n\r\nto fix error of basic loading \r\n\r\n```\r\ntokenizer = RagTokenizer.from_pretrained(\"facebook/rag-sequence-nq\")\r\n\r\nor\r\n\r\nretriever = RagRetriever.from_pretrained(\"facebook/rag-token-nq\", index_name=\"exact\", use_dummy_dataset=True)\r\n\r\nor\r\n\r\nmodel = RagTokenForGeneration.from_pretrained(\"facebook/rag-token-nq\")\r\n```", "Apologies for any duplicate comments, but experiencing the same issue as @ratthachat.\r\nAny updates or fixes on this? Currently running transformers-3.5.1", "Hello, feel free to open a PR with your proposed fix and we'll take a look. Thanks!", "Can confirm that this error is eliminated when downgrading to:\r\n```\r\ntransformers==3.3.1\r\ntokenizers==0.9.2\r\ndatasets==1.1.2\r\n```\r\n\r\nLooks very likely that something went wrong in the transition to git-lfs for this use case.\r\n\r\n@thomwolf @julien-c ", "Thanks for the detailed reports everyone, this should now be fixed on `master`.", "@julien-c \r\n\r\nHi I am trying to run [use_own_knowledge_dataset.py](https://github.com/huggingface/transformers/blob/master/examples/rag/use_own_knowledge_dataset.py) with **Transformers Version: 3.5.1**. But it gives the following error.\r\n\r\n```\r\n\r\nOSError: Can't load tokenizer for 'facebook/rag-sequence-nq/question_encoder_tokenizer'. Make sure that:\r\n\r\n- 'facebook/rag-sequence-nq/question_encoder_tokenizer' is a correct model identifier listed on 'https://huggingface.co/models'\r\n\r\n- or 'facebook/rag-seq\r\n```uence-nq/question_encoder_tokenizer' is the correct path to a directory containing relevant tokenizer files\r\n", "> @julien-c\r\n> \r\n> Hi I am trying to run [use_own_knowledge_dataset.py](https://github.com/huggingface/transformers/blob/master/examples/rag/use_own_knowledge_dataset.py) with **Transformers Version: 3.5.1**. But it gives the following error.\r\n> \r\n> ```\r\n> \r\n> OSError: Can't load tokenizer for 'facebook/rag-sequence-nq/question_encoder_tokenizer'. 
Make sure that:\r\n> \r\n> - 'facebook/rag-sequence-nq/question_encoder_tokenizer' is a correct model identifier listed on 'https://huggingface.co/models'\r\n> \r\n> - or 'facebook/rag-seq\r\n> ```uence-nq/question_encoder_tokenizer' is the correct path to a directory containing relevant tokenizer files\r\n> ```\r\n\r\nHey @shamanez - could you open a separate issue for this and tag @lhoestq ? :-) ", "Sure :) ", "The fix is not yet in a released version only on `master`, so you need to install from master for now.", "so shall I install from sources?", "Thank you! When will the fixed version be released?" ]
1,605
1,606
1,605
NONE
null
## Environment info - `transformers` version: 3.3.1 - Platform: Linux-4.15.0-38-generic-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.6.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: True - Using distributed or parallel set-up in script?: True (Retriever is distributed) ### Who can help @patrickvonplaten, @lhoestq ## Information Model I am using (Bert, XLNet ...): **facebook/rag-sequence-base** The problem arises when using: * [x ] the official example scripts: (give details below) examples/rag/finetune.sh The tasks I am working on is: * [x ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: run `sh finetune.sh` with ``` DATA_DIR=data_dir OUTPUT_DIR=output_dir MODEL_NAME_OR_PATH="facebook/rag-sequence-base" ``` gives: **Model name 'facebook/rag-sequence-base/question_encoder_tokenizer' not found in model shortcut name list (facebook/dpr-question_encoder-single-nq-base). Assuming 'facebook/rag-sequence-base/question_encoder_tokenizer' is a path, a model identifier, or url to a directory containing tokenizer files**. loading file https://s3.amazonaws.com/models.huggingface.co/bert/facebook/rag-sequence-base/question_encoder_tokenizer/vocab.txt from cache at /h/asabet/.cache/torch/transformers/14d599f015518cd5b95b5d567b8c06b265dbbf04047e44b3654efd7cbbacb697.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084 loading file https://s3.amazonaws.com/models.huggingface.co/bert/facebook/rag-sequence-base/question_encoder_tokenizer/added_tokens.json from cache at None loading file https://s3.amazonaws.com/models.huggingface.co/bert/facebook/rag-sequence-base/question_encoder_tokenizer/special_tokens_map.json from cache at /h/asabet/.cache/torch/transformers/70614c7a84151409876eaaaecb3b5185213aa5c560926855e35753b9909f1116.275045728fbf41c11d3dae08b8742c054377e18d92cc7b72b6351152a99b64e4 loading file https://s3.amazonaws.com/models.huggingface.co/bert/facebook/rag-sequence-base/question_encoder_tokenizer/tokenizer_config.json from cache at /h/asabet/.cache/torch/transformers/8ade9cf561f8c0a47d1c3785e850c57414d776b3795e21bd01e58483399d2de4.11f57497ee659e26f830788489816dbcb678d91ae48c06c50c9dc0e4438ec05b loading file https://s3.amazonaws.com/models.huggingface.co/bert/facebook/rag-sequence-base/question_encoder_tokenizer/tokenizer.json from cache at None **Model name 'facebook/rag-sequence-base/generator_tokenizer' not found in model shortcut name list (facebook/bart-base, facebook/bart-large, facebook/bart-large-mnli, facebook/bart-large-cnn, facebook/bart-large-xsum, yjernite/bart_eli5). Assuming 'facebook/rag-sequence-base/generator_tokenizer' is a path, a model identifier, or url to a directory containing tokenizer files.** loading file https://s3.amazonaws.com/models.huggingface.co/bert/facebook/rag-sequence-base/generator_tokenizer/vocab.json from cache at /h/asabet/.cache/torch/transformers/3b9637b6eab4a48cf2bc596e5992aebb74de6e32c9ee660a27366a63a8020557.6a4061e8fc00057d21d80413635a86fdcf55b6e7594ad9e25257d2f99a02f4be loading file https://s3.amazonaws.com/models.huggingface.co/bert/facebook/rag-sequence-base/generator_tokenizer/merges.txt from cache at /h/asabet/.cache/torch/transformers/b2a6adcb3b8a4c39e056d80a133951b99a56010158602cf85dee775936690c6a.70bec105b4158ed9a1747fea67a43f5dee97855c64d62b6ec3742f4cfdb5feda loading file https://s3.amazonaws.com/models.huggingface.co/bert/facebook/rag-sequence-base/generator_tokenizer/added_tokens.json from cache at None loading file https://s3.amazonaws.com/models.huggingface.co/bert/facebook/rag-sequence-base/generator_tokenizer/special_tokens_map.json from cache at /h/asabet/.cache/torch/transformers/342599872fb2f45f954699d3c67790c33b574cc552a4b433fedddc97e6a3c58e.6e217123a3ada61145de1f20b1443a1ec9aac93492a4bd1ce6a695935f0fd97a loading file https://s3.amazonaws.com/models.huggingface.co/bert/facebook/rag-sequence-base/generator_tokenizer/tokenizer_config.json from cache at /h/asabet/.cache/torch/transformers/e5f72dc4c0b1ba585d7afb7fa5e3e52ff0e1f101e49572e2caaf38fab070d4d6.d596a549211eb890d3bb341f3a03307b199bc2d5ed81b3451618cbcb04d1f1bc loading file https://s3.amazonaws.com/models.huggingface.co/bert/facebook/rag-sequence-base/generator_tokenizer/tokenizer.json from cache at None Traceback (most recent call last): File "finetune.py", line 499, in <module> main(args) File "finetune.py", line 439, in main model: GenerativeQAModule = GenerativeQAModule(args) File "finetune.py", line 105, in __init__ retriever = RagPyTorchDistributedRetriever.from_pretrained(hparams.model_name_or_path, config=config) File "/h/asabet/.local/lib/python3.6/site-packages/transformers/retrieval_rag.py", line 308, in from_pretrained config, question_encoder_tokenizer=question_encoder_tokenizer, generator_tokenizer=generator_tokenizer File "/scratch/ssd001/home/asabet/transformers/examples/rag/distributed_retriever.py", line 41, in __init__ index=index, **TypeError: __init__() got an unexpected keyword argument 'index'** ## Expected behavior finetune.sh should launch and run
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8447/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8447/timeline
completed
null
null
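Until the fix referenced in the comments above reaches a release, a minimal workaround sketch for the RAG tokenizer-path failure is to assemble the composite tokenizer from its two sub-tokenizers by hand. The sub-checkpoint names below are assumptions based on RAG's documented defaults (a DPR question encoder plus a BART generator), not something stated in this issue:

```python
from transformers import BartTokenizer, DPRQuestionEncoderTokenizer, RagTokenizer

# Build the composite RAG tokenizer from its parts instead of the combined
# checkpoint whose sub-folder paths fail to resolve in this report.
question_encoder = DPRQuestionEncoderTokenizer.from_pretrained(
    "facebook/dpr-question_encoder-single-nq-base"
)
generator = BartTokenizer.from_pretrained("facebook/bart-large")
tokenizer = RagTokenizer(question_encoder=question_encoder, generator=generator)

print(tokenizer.question_encoder.tokenize("who sings does he love me"))
```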
https://api.github.com/repos/huggingface/transformers/issues/8446
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8446/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8446/comments
https://api.github.com/repos/huggingface/transformers/issues/8446/events
https://github.com/huggingface/transformers/pull/8446
740,127,389
MDExOlB1bGxSZXF1ZXN0NTE4NjY4Njcw
8,446
using multi_gpu consistently
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,605
1,605
1,605
CONTRIBUTOR
null
As discussed [here](https://github.com/huggingface/transformers/pull/8341#issuecomment-722705833) this PR replaces * `multiple_gpu` * `multigpu` with `multi_gpu` for consistency There is no functionality change otherwise. I did repo-wide: ``` find . -type d -name ".git" -prune -o -type f -exec perl -pi -e 's#(multiple_gpu|multigpu)#multi_gpu#g' {} \; ``` @LysandreJik, @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8446/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8446/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8446", "html_url": "https://github.com/huggingface/transformers/pull/8446", "diff_url": "https://github.com/huggingface/transformers/pull/8446.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8446.patch", "merged_at": 1605032639000 }
https://api.github.com/repos/huggingface/transformers/issues/8445
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8445/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8445/comments
https://api.github.com/repos/huggingface/transformers/issues/8445/events
https://github.com/huggingface/transformers/pull/8445
740,127,323
MDExOlB1bGxSZXF1ZXN0NTE4NjY4NjE4
8,445
[marian.rst] remove combined lines
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I already did that in a commit to master, thanks for fixing too! :-)" ]
1,605
1,605
1,605
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8445/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8445/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8445", "html_url": "https://github.com/huggingface/transformers/pull/8445", "diff_url": "https://github.com/huggingface/transformers/pull/8445.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8445.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/8444
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8444/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8444/comments
https://api.github.com/repos/huggingface/transformers/issues/8444/events
https://github.com/huggingface/transformers/pull/8444
740,070,005
MDExOlB1bGxSZXF1ZXN0NTE4NjIxMTM2
8,444
Add missing import
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,605
1,605
1,605
CONTRIBUTOR
null
# What does this PR do? Fix a missing import for TF auto.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8444/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8444/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8444", "html_url": "https://github.com/huggingface/transformers/pull/8444", "diff_url": "https://github.com/huggingface/transformers/pull/8444.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8444.patch", "merged_at": 1605027693000 }
https://api.github.com/repos/huggingface/transformers/issues/8443
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8443/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8443/comments
https://api.github.com/repos/huggingface/transformers/issues/8443/events
https://github.com/huggingface/transformers/issues/8443
740,047,270
MDU6SXNzdWU3NDAwNDcyNzA=
8,443
Dropout p is changing after loading
{ "login": "burakisikli", "id": 982014, "node_id": "MDQ6VXNlcjk4MjAxNA==", "avatar_url": "https://avatars.githubusercontent.com/u/982014?v=4", "gravatar_id": "", "url": "https://api.github.com/users/burakisikli", "html_url": "https://github.com/burakisikli", "followers_url": "https://api.github.com/users/burakisikli/followers", "following_url": "https://api.github.com/users/burakisikli/following{/other_user}", "gists_url": "https://api.github.com/users/burakisikli/gists{/gist_id}", "starred_url": "https://api.github.com/users/burakisikli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/burakisikli/subscriptions", "organizations_url": "https://api.github.com/users/burakisikli/orgs", "repos_url": "https://api.github.com/users/burakisikli/repos", "events_url": "https://api.github.com/users/burakisikli/events{/privacy}", "received_events_url": "https://api.github.com/users/burakisikli/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hi, this is not the best way to update the dropout value as it will get overridden by the configuration value on load.\r\n\r\nThe classifier in `BertForSequenceClassification` is a linear layer, that has no dropout. If you want to change the dropout which is applied before the linear layer, you should update the `config.hidden_dropout_prob`. You can see the source code [here](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L1319).\r\n\r\nThe code is made to be easy to read and easy to tweak, so feel free to directly modify the source code to fit your needs.", "Hi,\r\nI've already tried it but it changes all of the output dropout layers value since each layer is using same config as you can see below. I think it'd be better to have a different dropout config for the last layer since bert official example is suggesting to optimize it with changing(https://github.com/google-research/bert/blob/master/predicting_movie_reviews_with_bert_on_tf_hub.ipynb). This also applies to roberta as well. I guess I need to modify the source code accordingly.\r\n\r\n```python\r\nconfig = BertConfig.from_pretrained('bert-base-uncased') \r\nconfig.hidden_dropout_prob=0.7\r\nmodel = BertForSequenceClassification.from_pretrained(\r\n \"bert-base-uncased\",\r\n config = config\r\n)\r\nmodel.cuda()\r\n```\r\n\r\nBertForSequenceClassification(\r\n (bert): BertModel(\r\n (embeddings): BertEmbeddings(\r\n (word_embeddings): Embedding(30522, 768, padding_idx=0)\r\n (position_embeddings): Embedding(512, 768)\r\n (token_type_embeddings): Embedding(2, 768)\r\n (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\r\n (dropout): Dropout(p=0.7, inplace=False)\r\n )\r\n (encoder): BertEncoder(\r\n (layer): ModuleList(\r\n (0): BertLayer(\r\n (attention): BertAttention(\r\n (self): BertSelfAttention(\r\n (query): Linear(in_features=768, out_features=768, bias=True)\r\n (key): Linear(in_features=768, out_features=768, bias=True)\r\n (value): Linear(in_features=768, out_features=768, bias=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n (output): BertSelfOutput(\r\n (dense): Linear(in_features=768, out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\r\n (dropout): Dropout(p=0.7, inplace=False)\r\n )\r\n )\r\n (intermediate): BertIntermediate(\r\n (dense): Linear(in_features=768, out_features=3072, bias=True)\r\n )\r\n (output): BertOutput(\r\n (dense): Linear(in_features=3072, out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\r\n (dropout): Dropout(p=0.7, inplace=False)\r\n )\r\n )\r\n (1): BertLayer(\r\n (attention): BertAttention(\r\n (self): BertSelfAttention(\r\n (query): Linear(in_features=768, out_features=768, bias=True)\r\n (key): Linear(in_features=768, out_features=768, bias=True)\r\n (value): Linear(in_features=768, out_features=768, bias=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n (output): BertSelfOutput(\r\n (dense): Linear(in_features=768, out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\r\n (dropout): Dropout(p=0.7, inplace=False)\r\n )\r\n )\r\n (intermediate): BertIntermediate(\r\n (dense): Linear(in_features=768, out_features=3072, bias=True)\r\n )\r\n (output): BertOutput(\r\n (dense): Linear(in_features=3072, out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\r\n (dropout): Dropout(p=0.7, inplace=False)\r\n )\r\n )\r\n (2): 
BertLayer(\r\n (attention): BertAttention(\r\n (self): BertSelfAttention(\r\n (query): Linear(in_features=768, out_features=768, bias=True)\r\n (key): Linear(in_features=768, out_features=768, bias=True)\r\n (value): Linear(in_features=768, out_features=768, bias=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n (output): BertSelfOutput(\r\n (dense): Linear(in_features=768, out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\r\n (dropout): Dropout(p=0.7, inplace=False)\r\n )\r\n )\r\n (intermediate): BertIntermediate(\r\n (dense): Linear(in_features=768, out_features=3072, bias=True)\r\n )\r\n (output): BertOutput(\r\n (dense): Linear(in_features=3072, out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\r\n (dropout): Dropout(p=0.7, inplace=False)\r\n )\r\n )\r\n ....\r\n (output): BertOutput(\r\n (dense): Linear(in_features=3072, out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\r\n (dropout): Dropout(p=0.7, inplace=False)\r\n )\r\n )\r\n )\r\n )\r\n (pooler): BertPooler(\r\n (dense): Linear(in_features=768, out_features=768, bias=True)\r\n (activation): Tanh()\r\n )\r\n )\r\n (dropout): Dropout(p=0.7, inplace=False)\r\n (classifier): Linear(in_features=768, out_features=2, bias=True)\r\n)", "Yes, the model files are completely independent of each other for that purpose: it should be very easy to modify each independent model file.\r\n\r\nFeel free to modify the model file so that it fits your needs.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,605
1,610
1,610
NONE
null
## Environment info - `transformers` version: 3.5.0 - Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.7.0+cu101 (True) - Tensorflow version (GPU?): 2.3.0 (True) - Using GPU in script?: - Using distributed or parallel set-up in script?: No ### Who can help @LysandreJik, @sgugger ## Information Model I am using (Bert, XLNet ...): Bert, Roberta The problem arises when using: * [ *] the official example scripts: Using information given in this link: https://huggingface.co/transformers/master/custom_datasets.html The tasks I am working on is: * [ *] my own task or dataset: text classification ## To reproduce Steps to reproduce the behavior: 1. I'm trying to change the dropout probability. I'm using one of these methods on a Bert instance: ```python model.classifier.dropout.p=0.7 model.classifier.dropout = nn.Dropout(0.7) ``` 2. After training is completed, the model is saved ```python model.save_pretrained('xxx/bert') ``` 3. The model is loaded in another session using this code snippet. But after loading, model.classifier.dropout.p changes back to 0.1, which is the value in the config file. ```python model = BertForSequenceClassification.from_pretrained("xxx/bert", num_labels = 3, output_attentions = False, output_hidden_states = False, ) ``` ## Expected behavior Dropout p reverts to the default value after loading the model, but since the model was modified it shouldn't show that behavior
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8443/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8443/timeline
completed
null
null
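A small sketch of the reload-time workaround implied by the discussion above: since `from_pretrained` rebuilds every dropout from the saved config, the custom head dropout has to be re-applied after each load. The checkpoint path and the 0.7 value mirror the report and are placeholders:

```python
import torch.nn as nn
from transformers import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained("xxx/bert", num_labels=3)
# The saved config still carries hidden_dropout_prob=0.1, so re-apply the
# custom value to the dropout that sits right before the classification head.
model.dropout = nn.Dropout(0.7)
assert model.dropout.p == 0.7
```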
https://api.github.com/repos/huggingface/transformers/issues/8442
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8442/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8442/comments
https://api.github.com/repos/huggingface/transformers/issues/8442/events
https://github.com/huggingface/transformers/issues/8442
740,002,461
MDU6SXNzdWU3NDAwMDI0NjE=
8,442
Models fine-tuned with gradient checkpointing (=True) fails to export to ONXX
{ "login": "samru-rai", "id": 64474512, "node_id": "MDQ6VXNlcjY0NDc0NTEy", "avatar_url": "https://avatars.githubusercontent.com/u/64474512?v=4", "gravatar_id": "", "url": "https://api.github.com/users/samru-rai", "html_url": "https://github.com/samru-rai", "followers_url": "https://api.github.com/users/samru-rai/followers", "following_url": "https://api.github.com/users/samru-rai/following{/other_user}", "gists_url": "https://api.github.com/users/samru-rai/gists{/gist_id}", "starred_url": "https://api.github.com/users/samru-rai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/samru-rai/subscriptions", "organizations_url": "https://api.github.com/users/samru-rai/orgs", "repos_url": "https://api.github.com/users/samru-rai/repos", "events_url": "https://api.github.com/users/samru-rai/events{/privacy}", "received_events_url": "https://api.github.com/users/samru-rai/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Indeed, I see why this would fail. I don't have access to your notebook, but as a temporary workaround you could do:\r\n\r\n```py\r\nmodel.save_pretrained(\"here\")\r\nmodel = ModelClass.from_pretrained(\"here\", gradient_checkpointing=False)\r\n```\r\n\r\nYou should be able to convert that model to ONNX then.", "I made it public now, my bad. Another work around I found was to edit the `config.json` by setting `\"gradient_checkpointing\": false`. I did this because the convert script looks for the model in a path and not in memory.", "Yes, this works too!", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,605
1,610
1,610
NONE
null
## Environment info - `transformers` version: 3.5.0 - Platform: Linux-5.4.0-52-generic-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.9 - PyTorch version (GPU?): 1.7.0 (True) - Tensorflow version (GPU?): 2.3.1 (False) - Using GPU in script?: False - Using distributed or parallel set-up in script?: False ### Who can help Hi @LysandreJik and @patrickvonplaten, I hope I have tagged the right person. If not please untag yourself and tag the right person. ## Information Model I am using (Bert): The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) It's a super simple script that uses gradient check-pointing with BERT. The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) A dummy data ## To reproduce I have made a reproducible google collab here -> https://colab.research.google.com/drive/1tUpIzbugZ4xPz6eAOJtZT-fGww9LemwN?usp=sharing Steps to reproduce the behavior: 1. Open the notebook 2. Runtime->Run all Error thrown: ```python RuntimeError Traceback (most recent call last) <ipython-input-5-2702a59a9c3e> in <module>() 7 tokenizer=tokenizer, # <-- CHANGED: add tokenizer 8 output=Path("onnx/bert-base-cased.onnx"), ----> 9 opset=11) 10 11 # Tensorflow 4 frames /usr/local/lib/python3.6/dist-packages/torch/onnx/utils.py in _export(model, args, f, export_params, verbose, training, input_names, output_names, operator_export_type, export_type, example_outputs, opset_version, _retain_param_name, do_constant_folding, strip_doc_string, dynamic_axes, keep_initializers_as_inputs, fixed_batch_size, custom_opsets, add_node_names, enable_onnx_checker, use_external_data_format, onnx_shape_inference, use_new_jit_passes) 648 params_dict, opset_version, dynamic_axes, defer_weight_export, 649 operator_export_type, strip_doc_string, val_keep_init_as_ip, custom_opsets, --> 650 val_add_node_names, val_use_external_data_format, model_file_location) 651 else: 652 proto, export_map = graph._export_onnx( RuntimeError: ONNX export failed: Couldn't export Python operator CheckpointFunction ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8442/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8442/timeline
completed
null
null
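A sketch combining the two workarounds from the comments: reload the fine-tuned model with gradient checkpointing disabled (equivalently, set `"gradient_checkpointing": false` in `config.json` by hand), re-save it, and export that copy. The checkpoint path and model class are placeholders for the reader's own setup:

```python
from pathlib import Path
from transformers import BertForSequenceClassification, BertTokenizer
from transformers.convert_graph_to_onnx import convert

# Reload with gradient checkpointing off, then re-save so config.json no
# longer makes the export trip over CheckpointFunction.
model = BertForSequenceClassification.from_pretrained(
    "my-finetuned-checkpoint", gradient_checkpointing=False
)
model.save_pretrained("export-ready")
tokenizer = BertTokenizer.from_pretrained("my-finetuned-checkpoint")

convert(framework="pt", model="export-ready", tokenizer=tokenizer,
        output=Path("onnx/model.onnx"), opset=11)
```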
https://api.github.com/repos/huggingface/transformers/issues/8441
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8441/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8441/comments
https://api.github.com/repos/huggingface/transformers/issues/8441/events
https://github.com/huggingface/transformers/issues/8441
739,954,560
MDU6SXNzdWU3Mzk5NTQ1NjA=
8,441
CUDA out of memory (ALBERT)!!
{ "login": "ppyu", "id": 32732750, "node_id": "MDQ6VXNlcjMyNzMyNzUw", "avatar_url": "https://avatars.githubusercontent.com/u/32732750?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ppyu", "html_url": "https://github.com/ppyu", "followers_url": "https://api.github.com/users/ppyu/followers", "following_url": "https://api.github.com/users/ppyu/following{/other_user}", "gists_url": "https://api.github.com/users/ppyu/gists{/gist_id}", "starred_url": "https://api.github.com/users/ppyu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ppyu/subscriptions", "organizations_url": "https://api.github.com/users/ppyu/orgs", "repos_url": "https://api.github.com/users/ppyu/repos", "events_url": "https://api.github.com/users/ppyu/events{/privacy}", "received_events_url": "https://api.github.com/users/ppyu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi there. Questions like this should be asked on the forum. In your code example, you are using different variable names for each of your three call, so it's logical to get more memory consumption. Python will only release the memory if you reuse the same variable name.", "Hi,friends.\r\n`you are using different variable names for each of your three call`\r\nwhat do you mean? \r\nMy problem is that:\r\nwhen I first call `outputs = self.albert()` , the memory is almost 4G\r\nAnd I second call `question, _ = self.albert()` , the memory increase to almost 8G.\r\nBut the variable `question` should not take up so much memory.", "> Hi there. Questions like this should be asked on the forum. In your code example, you are using different variable names for each of your three call, so it's logical to get more memory consumption. Python will only release the memory if you reuse the same variable name.\r\n\r\nHi,friends.\r\n`you are using different variable names for each of your three call`\r\nwhat do you mean? \r\nMy problem is that:\r\nwhen I first call `outputs = self.albert()` , the memory is almost 4G\r\nAnd I second call `question, _ = self.albert()` , the memory increase to almost 8G.\r\nBut the variable `question` should not take up so much memory.", "Agree with @sgugger. Question like this should be asked on https://discuss.huggingface.co\r\n\r\nWe are tying to keep the issues for bug reports and new features/model requests.\r\n\r\nClosing this for now." ]
1,605
1,605
1,605
NONE
null
# ❓ Questions & Help While using `albert-base-v2` to train my model, I got the following problem: ``` #first call outputs = self.albert(input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, head_mask=head_mask, inputs_embeds=inputs_embeds) sequence_output = outputs[0] context_mask = token_type_ids * attention_mask question_mask = ((1 - context_mask) * attention_mask) #second call question, _ = self.albert(input_ids, attention_mask=question_mask, token_type_ids=token_type_ids) #third call context, _ = self.albert(input_ids, attention_mask=context_mask, token_type_ids=token_type_ids) ``` While calling `self.albert()` thrice, the memory it consumes will multiply by 3. So that I must change my batch_size to 4, it's so bad! Is it a BUG or a feature? Even `albert-base` so large?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8441/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8441/timeline
completed
null
null
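For readers who hit the same wall: the 3x growth is expected, because every forward pass that will be backpropagated keeps its own activations. A minimal sketch of the tradeoff, under the assumption that gradients are only needed through one pass; if all passes must be trained, gradient checkpointing is the usual lever instead:

```python
import torch
from transformers import AlbertModel, AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
albert = AlbertModel.from_pretrained("albert-base-v2")
inputs = tokenizer("a question [SEP] a context", return_tensors="pt")

outputs = albert(**inputs)  # activations are kept for the backward pass
with torch.no_grad():       # these activations are freed immediately,
    question = albert(**inputs)  # but no gradients flow through them
    context = albert(**inputs)
```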
https://api.github.com/repos/huggingface/transformers/issues/8440
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8440/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8440/comments
https://api.github.com/repos/huggingface/transformers/issues/8440/events
https://github.com/huggingface/transformers/pull/8440
739,948,596
MDExOlB1bGxSZXF1ZXN0NTE4NTIwNzk5
8,440
Question template
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "<img width=\"1402\" alt=\"image\" src=\"https://user-images.githubusercontent.com/7353373/98936078-c3bcb600-24e4-11eb-9ae3-6894af553004.png\">\r\n\r\nFor some reason I don't see the \"Question and Help\" option when I try to open an issue @sgugger @LysandreJik ", "Mmm, guess the metadata at the top doesn't like the link." ]
1,605
1,605
1,605
COLLABORATOR
null
# What does this PR do? This PR updates the question template to insist a bit more on users using the forum for questions.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8440/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8440/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8440", "html_url": "https://github.com/huggingface/transformers/pull/8440", "diff_url": "https://github.com/huggingface/transformers/pull/8440.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8440.patch", "merged_at": 1605020877000 }
https://api.github.com/repos/huggingface/transformers/issues/8439
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8439/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8439/comments
https://api.github.com/repos/huggingface/transformers/issues/8439/events
https://github.com/huggingface/transformers/pull/8439
739,907,146
MDExOlB1bGxSZXF1ZXN0NTE4NDg3MTcz
8,439
Model sharing rst
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Merging it now with Julien's offline approval." ]
1,605
1,605
1,605
MEMBER
null
Update the model sharing RST for the new model versioning.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8439/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8439/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8439", "html_url": "https://github.com/huggingface/transformers/pull/8439", "diff_url": "https://github.com/huggingface/transformers/pull/8439.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8439.patch", "merged_at": 1605015312000 }
https://api.github.com/repos/huggingface/transformers/issues/8438
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8438/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8438/comments
https://api.github.com/repos/huggingface/transformers/issues/8438/events
https://github.com/huggingface/transformers/issues/8438
739,893,503
MDU6SXNzdWU3Mzk4OTM1MDM=
8,438
login to huggingface forum
{ "login": "srulikbd", "id": 35503583, "node_id": "MDQ6VXNlcjM1NTAzNTgz", "avatar_url": "https://avatars.githubusercontent.com/u/35503583?v=4", "gravatar_id": "", "url": "https://api.github.com/users/srulikbd", "html_url": "https://github.com/srulikbd", "followers_url": "https://api.github.com/users/srulikbd/followers", "following_url": "https://api.github.com/users/srulikbd/following{/other_user}", "gists_url": "https://api.github.com/users/srulikbd/gists{/gist_id}", "starred_url": "https://api.github.com/users/srulikbd/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/srulikbd/subscriptions", "organizations_url": "https://api.github.com/users/srulikbd/orgs", "repos_url": "https://api.github.com/users/srulikbd/repos", "events_url": "https://api.github.com/users/srulikbd/events{/privacy}", "received_events_url": "https://api.github.com/users/srulikbd/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Maybe @julien-c or @Pierrci knows!", "can you try again, just in case it was an intermittent issue?", "Wonderful, i login with another email and it works. Thanks." ]
1,605
1,605
1,605
NONE
null
Hey, I'm not sure where to post this, so my apologies in advance. I'm trying to log in to the Hugging Face forum, but it returns the message: "Unauthorized". What can I do? (I created a user.) Thanks.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8438/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8438/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8437
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8437/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8437/comments
https://api.github.com/repos/huggingface/transformers/issues/8437/events
https://github.com/huggingface/transformers/pull/8437
739,848,228
MDExOlB1bGxSZXF1ZXN0NTE4NDM4MTQw
8,437
[T5Tokenizer] fix t5 token type ids
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,605
1,605
1,605
MEMBER
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #7840 T5 does not use token type ids. Nevertheless, the T5Tokenizer should analogs to RobertaTokenizer return all [0] for the `token_type_ids`. Functions and Tests are added for T5TokenizerFast and T5Tokenizer. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8437/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8437/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8437", "html_url": "https://github.com/huggingface/transformers/pull/8437", "diff_url": "https://github.com/huggingface/transformers/pull/8437.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8437.patch", "merged_at": 1605036114000 }
https://api.github.com/repos/huggingface/transformers/issues/8436
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8436/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8436/comments
https://api.github.com/repos/huggingface/transformers/issues/8436/events
https://github.com/huggingface/transformers/pull/8436
739,819,087
MDExOlB1bGxSZXF1ZXN0NTE4NDEzMzU2
8,436
Windows dev section in the contributing file
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,605
1,605
1,605
CONTRIBUTOR
null
# What does this PR do? This PR adds a section for people who want to contribute from a Windows environment.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8436/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8436/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8436", "html_url": "https://github.com/huggingface/transformers/pull/8436", "diff_url": "https://github.com/huggingface/transformers/pull/8436.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8436.patch", "merged_at": 1605025157000 }
https://api.github.com/repos/huggingface/transformers/issues/8435
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8435/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8435/comments
https://api.github.com/repos/huggingface/transformers/issues/8435/events
https://github.com/huggingface/transformers/pull/8435
739,815,048
MDExOlB1bGxSZXF1ZXN0NTE4NDA5OTk4
8,435
[T5 Tokenizer] Fix t5 special tokens
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Fixes #7796 " ]
1,605
1,605
1,605
MEMBER
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #5142, #8109 T5FastToeknzier and T5SlowTokenizer have different behaviors for special tokens as shown in the issue above. This PR fixes the slow T5 tokenizer and adds a test making sure that Fast and Slow tokenizer have the same behavior. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8435/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8435/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8435", "html_url": "https://github.com/huggingface/transformers/pull/8435", "diff_url": "https://github.com/huggingface/transformers/pull/8435.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8435.patch", "merged_at": 1605030858000 }
https://api.github.com/repos/huggingface/transformers/issues/8434
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8434/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8434/comments
https://api.github.com/repos/huggingface/transformers/issues/8434/events
https://github.com/huggingface/transformers/pull/8434
739,803,612
MDExOlB1bGxSZXF1ZXN0NTE4NDAwMzg0
8,434
Support serialized tokenizer in AutoTokenizer
{ "login": "gkonstanty", "id": 3730708, "node_id": "MDQ6VXNlcjM3MzA3MDg=", "avatar_url": "https://avatars.githubusercontent.com/u/3730708?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gkonstanty", "html_url": "https://github.com/gkonstanty", "followers_url": "https://api.github.com/users/gkonstanty/followers", "following_url": "https://api.github.com/users/gkonstanty/following{/other_user}", "gists_url": "https://api.github.com/users/gkonstanty/gists{/gist_id}", "starred_url": "https://api.github.com/users/gkonstanty/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gkonstanty/subscriptions", "organizations_url": "https://api.github.com/users/gkonstanty/orgs", "repos_url": "https://api.github.com/users/gkonstanty/repos", "events_url": "https://api.github.com/users/gkonstanty/events{/privacy}", "received_events_url": "https://api.github.com/users/gkonstanty/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I'm having an issue that could be fixed by this PR. I trained a BPETokinizer using the Tokenizers library, uploaded the tokenizer generated JSON file to [my HF repository](https://huggingface.co/jonatasgrosman/bartuque-bart-large-mefmt/blob/main/vocab.json) and this command:\r\n\r\n```python\r\nfrom transformers import AutoTokenizer\r\nAutoTokenizer.from_pretrained(\"jonatasgrosman/bartuque-bart-large-mefmt\")\r\n```\r\n\r\n... Results on this error:\r\n\r\n```python\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/Users/jonatas/projects/github/bartuque/venv/lib/python3.7/site-packages/transformers/models/auto/tokenization_auto.py\", line 385, in from_pretrained\r\n return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)\r\n File \"/Users/jonatas/projects/github/bartuque/venv/lib/python3.7/site-packages/transformers/tokenization_utils_base.py\", line 1769, in from_pretrained\r\n resolved_vocab_files, pretrained_model_name_or_path, init_configuration, *init_inputs, **kwargs\r\n File \"/Users/jonatas/projects/github/bartuque/venv/lib/python3.7/site-packages/transformers/tokenization_utils_base.py\", line 1787, in _from_pretrained\r\n **(copy.deepcopy(kwargs)),\r\n File \"/Users/jonatas/projects/github/bartuque/venv/lib/python3.7/site-packages/transformers/tokenization_utils_base.py\", line 1841, in _from_pretrained\r\n tokenizer = cls(*init_inputs, **init_kwargs)\r\n File \"/Users/jonatas/projects/github/bartuque/venv/lib/python3.7/site-packages/transformers/models/roberta/tokenization_roberta.py\", line 171, in __init__\r\n **kwargs,\r\n File \"/Users/jonatas/projects/github/bartuque/venv/lib/python3.7/site-packages/transformers/models/gpt2/tokenization_gpt2.py\", line 178, in __init__\r\n self.decoder = {v: k for k, v in self.encoder.items()}\r\n File \"/Users/jonatas/projects/github/bartuque/venv/lib/python3.7/site-packages/transformers/models/gpt2/tokenization_gpt2.py\", line 178, in <dictcomp>\r\n self.decoder = {v: k for k, v in self.encoder.items()}\r\nTypeError: unhashable type: 'list'\r\n```\r\n\r\n I hope this PR will be merged soon.\r\n", "Just an update... after renaming my tokenizer file from `vocab.json` to `tokenizer.json`, the AutoTokenizer stop to crash and is working well now.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,605
1,619
1,619
NONE
null
# What does this PR do? Addresses issue: https://github.com/huggingface/transformers/issues/7293 With these changes, `AutoTokenizer.from_pretrained()` also supports loading a tokenizer that was saved with the [🤗 Tokenizers](https://github.com/huggingface/tokenizers) library. Example: ```python from tokenizers import CharBPETokenizer tokenizer = CharBPETokenizer() tokenizer.save('./char-bpe.json') from transformers import AutoTokenizer my_tokenizer = AutoTokenizer.from_pretrained('./char-bpe.json', use_fast=True) ``` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Also: @mfuntowicz (tokenizers), @thomwolf (due to https://github.com/huggingface/transformers/pull/7659)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8434/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8434/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8434", "html_url": "https://github.com/huggingface/transformers/pull/8434", "diff_url": "https://github.com/huggingface/transformers/pull/8434.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8434.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/8433
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8433/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8433/comments
https://api.github.com/repos/huggingface/transformers/issues/8433/events
https://github.com/huggingface/transformers/pull/8433
739,778,809
MDExOlB1bGxSZXF1ZXN0NTE4Mzc5NjA1
8,433
Replaced unnecessary iadd operations on lists in tokenization_utils.py with proper list methods
{ "login": "bombs-kim", "id": 11001573, "node_id": "MDQ6VXNlcjExMDAxNTcz", "avatar_url": "https://avatars.githubusercontent.com/u/11001573?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bombs-kim", "html_url": "https://github.com/bombs-kim", "followers_url": "https://api.github.com/users/bombs-kim/followers", "following_url": "https://api.github.com/users/bombs-kim/following{/other_user}", "gists_url": "https://api.github.com/users/bombs-kim/gists{/gist_id}", "starred_url": "https://api.github.com/users/bombs-kim/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bombs-kim/subscriptions", "organizations_url": "https://api.github.com/users/bombs-kim/orgs", "repos_url": "https://api.github.com/users/bombs-kim/repos", "events_url": "https://api.github.com/users/bombs-kim/events{/privacy}", "received_events_url": "https://api.github.com/users/bombs-kim/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for the quick reviews, both of you!\r\n\r\n@LysandreJik \r\nI just searched the whole code base and found many more list append operations than updates through `__iadd__`. So I actually think that for consistency over the whole project scope, it may be better to use list append whenever it's applicable. If you approve, I would like to fix the other similar cases, too.\r\n\r\nRegarding tuples, the current style seems perfectly okay. Tuples are immutable, and `__iadd__` does not update the existing tuple but replaces it with a newly created one, i.e., `tup += ('a',)` is equivalent to `tup = tup + ('a',)`. So they should be treated differently from lists.\r\n\r\nHere is a demonstration that shows the different effects of `__iadd__` on lists and tuples.\r\n```\r\nl = ['a']\r\nprint(id(l)) # 4327718256\r\nl += ['b']\r\nprint(id(l)) # 4327718256 (same object)\r\n\r\n\r\ntup = ('a',)\r\nprint(id(tup)) # 4326480784\r\ntup += ('b',)\r\nprint(id(tup)) # a new, different id (ID changed!)\r\n```" ]
1,605
1,605
1,605
CONTRIBUTOR
null
# Replaced unnecessary iadd operations on lists in tokenization_utils.py with proper list methods

@mfuntowicz Previously, unnecessarily many list objects were created because of list updates through iadd operations. This is bad for the following reasons.

* It slows down the program.
* It's a substandard style.

Regarding the slowdown, please see the following snippets.

```
l = []
for i in range(10**6):
    l.append(i)
# Takes 0.13282 seconds, on average, on my machine
```

```
l = []
for i in range(10**6):
    l += [i]  # this creates a new list [i] every iteration
# Takes 0.14698 seconds, on average, on my machine
```

The previous style is considered bad since it's confusing. It is easy to think that `l += [i]` has the same semantics as `l = l + [i]`, which is not at all the case. To see this, run the following code.

```
l = []
for i in range(10**6):
    l = l + [i]  # This replaces the existing list with a new list (l + [i]) every iteration
```

The fact that the existing list is mutated is more clearly expressed in the new code, and, to the best of my knowledge, the Python standard library code consistently prefers the style of the new code.
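For readers who want to reproduce the timing claim above, here is a self-contained, standard-library-only sketch of the methodology; the absolute numbers are machine-dependent.

```python
# Reproduces the append vs. += timing comparison with timeit;
# exact figures will vary by machine and Python version.
import timeit

append_time = timeit.timeit(
    "l = []\nfor i in range(10**6):\n    l.append(i)", number=10)
iadd_time = timeit.timeit(
    "l = []\nfor i in range(10**6):\n    l += [i]", number=10)

print(f"append: {append_time / 10:.5f}s per run")
print(f"iadd:   {iadd_time / 10:.5f}s per run")
```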
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8433/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8433/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8433", "html_url": "https://github.com/huggingface/transformers/pull/8433", "diff_url": "https://github.com/huggingface/transformers/pull/8433.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8433.patch", "merged_at": 1605115798000 }
https://api.github.com/repos/huggingface/transformers/issues/8432
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8432/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8432/comments
https://api.github.com/repos/huggingface/transformers/issues/8432/events
https://github.com/huggingface/transformers/pull/8432
739,748,175
MDExOlB1bGxSZXF1ZXN0NTE4MzU0NzQx
8,432
Add auto next sentence prediction
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,605
1,605
1,605
CONTRIBUTOR
null
# What does this PR do?

This PR adds auto models for the next sentence prediction task.
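A usage sketch of the kind of auto class this PR introduces; the class name below is assumed from the naming pattern of the other auto models, so treat this as an illustration rather than the PR's exact API.

```python
# Hedged sketch: AutoModelForNextSentencePrediction is assumed to follow
# the naming pattern of the existing Auto* classes.
import torch
from transformers import AutoTokenizer, AutoModelForNextSentencePrediction

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForNextSentencePrediction.from_pretrained("bert-base-uncased")

inputs = tokenizer("The sky is blue.", "It rarely rains in the desert.",
                   return_tensors="pt")
outputs = model(**inputs, return_dict=True)
# For BERT's NSP head, index 0 is the "sentence B follows sentence A" logit.
print(torch.softmax(outputs.logits, dim=-1))
```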
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8432/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8432/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8432", "html_url": "https://github.com/huggingface/transformers/pull/8432", "diff_url": "https://github.com/huggingface/transformers/pull/8432.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8432.patch", "merged_at": 1605024709000 }
https://api.github.com/repos/huggingface/transformers/issues/8431
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8431/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8431/comments
https://api.github.com/repos/huggingface/transformers/issues/8431/events
https://github.com/huggingface/transformers/issues/8431
739,716,272
MDU6SXNzdWU3Mzk3MTYyNzI=
8,431
Get Scores for each NE Label
{ "login": "Stimmot", "id": 29411999, "node_id": "MDQ6VXNlcjI5NDExOTk5", "avatar_url": "https://avatars.githubusercontent.com/u/29411999?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Stimmot", "html_url": "https://github.com/Stimmot", "followers_url": "https://api.github.com/users/Stimmot/followers", "following_url": "https://api.github.com/users/Stimmot/following{/other_user}", "gists_url": "https://api.github.com/users/Stimmot/gists{/gist_id}", "starred_url": "https://api.github.com/users/Stimmot/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Stimmot/subscriptions", "organizations_url": "https://api.github.com/users/Stimmot/orgs", "repos_url": "https://api.github.com/users/Stimmot/repos", "events_url": "https://api.github.com/users/Stimmot/events{/privacy}", "received_events_url": "https://api.github.com/users/Stimmot/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Maybe @sgugger or @jplu have an idea", "`seqeval` can give you more information; for instance, the version in the Datasets library returns all the metrics per label, IIRC. The code is [here](https://github.com/huggingface/datasets/blob/8005fed0887236804a07bfdc7dc69298e15dac7c/metrics/seqeval/seqeval.py#L96), you just need to adapt what's inside the `compute_metrics` function to fit your needs.", "Hi @sgugger - that did the trick, thank you!" ]
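As a concrete illustration of the suggestion above, `seqeval`'s `classification_report` prints precision, recall, F1 and support for each entity type separately; the toy label sequences below are made up.

```python
# Toy per-entity metrics with seqeval; the labels are illustrative only.
from seqeval.metrics import classification_report

y_true = [["O", "B-PER", "I-PER", "O", "B-LOC"]]
y_pred = [["O", "B-PER", "O", "O", "B-LOC"]]

# Reports per-label precision/recall/f1/support, so the dominant O tag
# no longer hides how the model does on the real entity labels.
print(classification_report(y_true, y_pred))
```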
1,604
1,605
1,605
NONE
null
I'm running the run_ner.py script with the bert-base-german-cased transformer model on the token classification task, training it on custom NE labels and making predictions for German documents. I have 11 labels in total. I wondered if there is any way to get prediction results (meaning loss, accuracy, precision, etc.) not only for the whole task, but for each label individually. This would make it easier to compare the real performance of the model, so that each label has results like:

```
eval_loss = 0.07476427406072617
eval_accuracy_score = 0.9818217086485438
eval_precision = 0.6756756756756757
eval_recall = 0.676378772112383
eval_f1 = 0.6760270410816434
```

The reason I'm asking: the O labels (the only thing I kept from BIO tagging) are of course the majority of all labels, so the accuracy is quite high because the model correctly predicts most of them, but as a consequence the scores for the real labels get lost in the statistical noise.

Is there any way to achieve this? Thanks in advance.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8431/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8431/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8430
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8430/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8430/comments
https://api.github.com/repos/huggingface/transformers/issues/8430/events
https://github.com/huggingface/transformers/issues/8430
739,531,342
MDU6SXNzdWU3Mzk1MzEzNDI=
8,430
RAG: Explanation on Retriever Variables.
{ "login": "shamanez", "id": 16892570, "node_id": "MDQ6VXNlcjE2ODkyNTcw", "avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shamanez", "html_url": "https://github.com/shamanez", "followers_url": "https://api.github.com/users/shamanez/followers", "following_url": "https://api.github.com/users/shamanez/following{/other_user}", "gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}", "starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shamanez/subscriptions", "organizations_url": "https://api.github.com/users/shamanez/orgs", "repos_url": "https://api.github.com/users/shamanez/repos", "events_url": "https://api.github.com/users/shamanez/events{/privacy}", "received_events_url": "https://api.github.com/users/shamanez/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }, { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
[ "Hey @shamanez - could you please forward this question to the forum: https://discuss.huggingface.co/ . We try to keep the issues for bug reports." ]
1,604
1,605
1,605
CONTRIBUTOR
null
Can you please explain what these terms in the RAG retriever mean? 1. config.index_name 2. config.index_path
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8430/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8430/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8429
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8429/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8429/comments
https://api.github.com/repos/huggingface/transformers/issues/8429/events
https://github.com/huggingface/transformers/pull/8429
739,529,059
MDExOlB1bGxSZXF1ZXN0NTE4MTczNTUw
8,429
[examples] better PL version check
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,604
1,605
1,605
CONTRIBUTOR
null
`pkg_resources.require(f"{pkg}>={min_ver}")` does a great job of checking the minimal required versions at runtime, but I wasn't aware that it also checks that the package's dependencies meet their requirements! So I started getting a false alarm about needing `pytorch-lightning=1.0.4` when I already had a higher version. The problem was in:

```
$ python -c 'import pkg_resources; pkg_resources.require("pytorch_lightning>=1.0.4")'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pkg_resources/__init__.py", line 884, in require
    needed = self.resolve(parse_requirements(requirements))
  File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pkg_resources/__init__.py", line 775, in resolve
    raise VersionConflict(dist, req).with_context(dependent_req)
pkg_resources.ContextualVersionConflict: (torch 1.8.0.dev20201106+cu110 (/mnt/nvme1/anaconda3/envs/main-38/lib/python3.8/site-packages), Requirement.parse('torch<1.8,>=1.3'), {'pytorch-lightning'})
```

Long story short, currently PL explicitly excludes pytorch-1.8 from its dependency list: https://github.com/PyTorchLightning/pytorch-lightning/issues/4596 - which leads to this problem. When I upgrade PL, pip uninstalls `pytorch-1.8` - thanks, but no thanks - rtx-3090 doesn't work with pytorch < 1.8. So I install it back, and now I get the failure above. Except in the current code it's masked by the `try/except` block, which hides the actual problem. So this is not good.

This PR rewrites the check so that it no longer verifies whether the dependencies of the package in question are in order, and only checks that the minimal version is met.

@sshleifer
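A minimal sketch of the intended behavior, assuming one only wants the installed version compared against a floor; the helper name and error wording here are illustrative and may differ from the actual code in this PR.

```python
# Hedged sketch: check only the installed version of a package, without
# resolving that package's own dependency tree (the step that produced the
# false alarm above).
import pkg_resources
from packaging import version

def require_min_version(pkg: str, min_ver: str) -> None:
    # get_distribution just looks up installed metadata; unlike
    # pkg_resources.require it does not validate pkg's dependencies.
    got = pkg_resources.get_distribution(pkg).version
    if version.parse(got) < version.parse(min_ver):
        raise ImportError(f"{pkg}>={min_ver} is required, but found {pkg}=={got}")

require_min_version("pytorch_lightning", "1.0.4")
```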
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8429/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8429/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8429", "html_url": "https://github.com/huggingface/transformers/pull/8429", "diff_url": "https://github.com/huggingface/transformers/pull/8429.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8429.patch", "merged_at": 1605018804000 }
https://api.github.com/repos/huggingface/transformers/issues/8428
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8428/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8428/comments
https://api.github.com/repos/huggingface/transformers/issues/8428/events
https://github.com/huggingface/transformers/pull/8428
739,508,433
MDExOlB1bGxSZXF1ZXN0NTE4MTU3MzAy
8,428
Add missing tasks to `pipeline` docstring
{ "login": "bryant1410", "id": 3905501, "node_id": "MDQ6VXNlcjM5MDU1MDE=", "avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bryant1410", "html_url": "https://github.com/bryant1410", "followers_url": "https://api.github.com/users/bryant1410/followers", "following_url": "https://api.github.com/users/bryant1410/following{/other_user}", "gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}", "starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions", "organizations_url": "https://api.github.com/users/bryant1410/orgs", "repos_url": "https://api.github.com/users/bryant1410/repos", "events_url": "https://api.github.com/users/bryant1410/events{/privacy}", "received_events_url": "https://api.github.com/users/bryant1410/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,604
1,605
1,605
CONTRIBUTOR
null
I added missing tasks to `pipeline` docstring. Also, I fixed some typos I found.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8428/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8428/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8428", "html_url": "https://github.com/huggingface/transformers/pull/8428", "diff_url": "https://github.com/huggingface/transformers/pull/8428.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8428.patch", "merged_at": 1605033866000 }
https://api.github.com/repos/huggingface/transformers/issues/8427
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8427/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8427/comments
https://api.github.com/repos/huggingface/transformers/issues/8427/events
https://github.com/huggingface/transformers/issues/8427
739,364,906
MDU6SXNzdWU3MzkzNjQ5MDY=
8,427
Set num_beams=4 for all Helsinki-NLP models
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "id": 2039044877, "node_id": "MDU6TGFiZWwyMDM5MDQ0ODc3", "url": "https://api.github.com/repos/huggingface/transformers/labels/marian", "name": "marian", "color": "30cc95", "default": false, "description": "" } ]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "I can do the change! Will wait until the new git model hub is merged and then apply it :-) ", "Thanks!\n", "Should be decently easy to do, @patrickvonplaten, let me know when you get to it :)", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "@julien-c, @jorgtied - sorry, I forgot about this issue... changing `num_beams=4` for all opus models now." ]
1,604
1,610
1,610
CONTRIBUTOR
null
Currently it is 6. Empirically, I tested 77 random models and num_beams=4 was about 50% faster with, on average, slightly higher BLEU (22.5 vs 22.4). We also have @jorgtied 's approval for the change. On slack, he wrote

> Again, no systematic evaluation - more like a feeling. I had the impression that 1 or 2 is worse, and I didn't want to set 10 or 12, which I have seen otherwise, because it may slow down things quite substantially. If you make some more tests then let me know what you will find … Thanks!

There are about 1300 affected models, so this feels like the type of thing @patrickvonplaten 's script could do well.
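For reference, the beam width can also be overridden per call at generation time; the checkpoint below is one of the affected opus models, and this snippet is just an illustration, not part of the proposed change itself.

```python
# Illustrative only: forcing num_beams=4 at generation time for a Marian
# checkpoint, instead of relying on the config default discussed above.
from transformers import MarianMTModel, MarianTokenizer

name = "Helsinki-NLP/opus-mt-en-de"
tokenizer = MarianTokenizer.from_pretrained(name)
model = MarianMTModel.from_pretrained(name)

batch = tokenizer(["Beam width is a speed/quality trade-off."],
                  return_tensors="pt")
generated = model.generate(**batch, num_beams=4)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```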
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8427/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8427/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8426
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8426/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8426/comments
https://api.github.com/repos/huggingface/transformers/issues/8426/events
https://github.com/huggingface/transformers/issues/8426
739,308,783
MDU6SXNzdWU3MzkzMDg3ODM=
8,426
Wrong files names in model list for "xlm-roberta-large-finetuned-conll03-german"
{ "login": "padmalcom", "id": 3961950, "node_id": "MDQ6VXNlcjM5NjE5NTA=", "avatar_url": "https://avatars.githubusercontent.com/u/3961950?v=4", "gravatar_id": "", "url": "https://api.github.com/users/padmalcom", "html_url": "https://github.com/padmalcom", "followers_url": "https://api.github.com/users/padmalcom/followers", "following_url": "https://api.github.com/users/padmalcom/following{/other_user}", "gists_url": "https://api.github.com/users/padmalcom/gists{/gist_id}", "starred_url": "https://api.github.com/users/padmalcom/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/padmalcom/subscriptions", "organizations_url": "https://api.github.com/users/padmalcom/orgs", "repos_url": "https://api.github.com/users/padmalcom/repos", "events_url": "https://api.github.com/users/padmalcom/events{/privacy}", "received_events_url": "https://api.github.com/users/padmalcom/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi, what do you mean they're incorrect? Are you having issues loading the files in your `transformers` objects?", "Hi, click on [https://huggingface.co/xlm-roberta-large-finetuned-conll03-german](https://huggingface.co/xlm-roberta-large-finetuned-conll03-german) and then on _List all files in model_ and move the mouse over each link. You will see that the files have a prefix (namely the name of the model). This is not allowed by the transformers API, since it expects to find e.g. config.json instead of xlm-roberta-large-finetuned-conll03-german-config.json. Hope this explanation helps.", "~I cannot see the prefix in the link you've given, and~ I can correctly load the models in the library:\r\n\r\n```py\r\nfrom transformers import XLMRobertaModel\r\nXLMRobertaModel.from_pretrained(\"xlm-roberta-large-finetuned-conll03-german\")\r\n```\r\nworks correctly.\r\n\r\nThe legacy models (e.g. `bert-base-cased`, this one too) had the prefix, but we've changed the approach since and only keep them that way for backwards compatibility.", "I can confirm this is intended", "Okay, I only experience this error when downloading the files and loading them from local storage instead of providing a name. But if this is working as intended, it is fine for me. " ]
1,604
1,604
1,604
NONE
null
Hi, the names of the model files for "xlm-roberta-large-finetuned-conll03-german" are incorrect and have the model name as a prefix. Example: xlm-roberta-large-finetuned-conll03-german-config.json instead of config.json
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8426/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8426/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8425
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8425/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8425/comments
https://api.github.com/repos/huggingface/transformers/issues/8425/events
https://github.com/huggingface/transformers/pull/8425
739,306,488
MDExOlB1bGxSZXF1ZXN0NTE3OTkyMjYz
8,425
Check all models are in an auto class
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,604
1,604
1,604
COLLABORATOR
null
# What does this PR do?

Following up on @patrickvonplaten's fixes, this PR adds a script to check that all models are in an auto class. This way we will get a CI error if a newly added model ends up forgotten :-)
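A rough sketch of what such a consistency check might look like; the `MODEL_*MAPPING` naming convention and module path are assumed from `transformers`' auto modules, and the actual script added by this PR may differ in detail.

```python
# Hedged sketch of an "every model is registered in an auto mapping" check.
from transformers.models.auto import modeling_auto

registered = set()
for name, obj in vars(modeling_auto).items():
    # Only look at the auto mappings, e.g. MODEL_MAPPING,
    # MODEL_FOR_QUESTION_ANSWERING_MAPPING, etc.
    if name.startswith("MODEL") and name.endswith("MAPPING") and hasattr(obj, "values"):
        for value in obj.values():
            classes = value if isinstance(value, (list, tuple)) else (value,)
            registered.update(getattr(c, "__name__", str(c)) for c in classes)

print(f"{len(registered)} model classes are registered in an auto mapping")
```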
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8425/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8425/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8425", "html_url": "https://github.com/huggingface/transformers/pull/8425", "diff_url": "https://github.com/huggingface/transformers/pull/8425.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8425.patch", "merged_at": 1604954695000 }
https://api.github.com/repos/huggingface/transformers/issues/8424
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8424/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8424/comments
https://api.github.com/repos/huggingface/transformers/issues/8424/events
https://github.com/huggingface/transformers/issues/8424
739,298,069
MDU6SXNzdWU3MzkyOTgwNjk=
8,424
Electra multi-gpu pretraining.
{ "login": "652994331", "id": 51428350, "node_id": "MDQ6VXNlcjUxNDI4MzUw", "avatar_url": "https://avatars.githubusercontent.com/u/51428350?v=4", "gravatar_id": "", "url": "https://api.github.com/users/652994331", "html_url": "https://github.com/652994331", "followers_url": "https://api.github.com/users/652994331/followers", "following_url": "https://api.github.com/users/652994331/following{/other_user}", "gists_url": "https://api.github.com/users/652994331/gists{/gist_id}", "starred_url": "https://api.github.com/users/652994331/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/652994331/subscriptions", "organizations_url": "https://api.github.com/users/652994331/orgs", "repos_url": "https://api.github.com/users/652994331/repos", "events_url": "https://api.github.com/users/652994331/events{/privacy}", "received_events_url": "https://api.github.com/users/652994331/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "That would depend on the training script. All of our models are `nn.Module`s, so all can be trained on multi-GPU; it just depends on the script. What script are you using?", "@LysandreJik Hi, thanks for your reply. I am not quite sure about the script. What I am doing now is using Google's TensorFlow ELECTRA, but it seems I haven't figured out how to use multiple GPUs in TensorFlow, which is why I am here. Do you have any advice? Maybe I can just use the ELECTRA in this project to achieve multi-GPU pretraining?", "Yes, you could check [this thread](https://discuss.huggingface.co/t/electra-training-reimplementation-and-discussion/1004) and use their project to train ELECTRA, which is based on this repository.", "@LysandreJik thanks, however, according to this issue https://github.com/richarddwang/electra_pytorch/issues/5 , I guess they haven't figured out how to use multi-GPU yet.", "Ah, then I don't think I can help you further, unfortunately.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,604
1,610
1,610
NONE
null
Hi, I am pretraining the ELECTRA model with my own data; for now, I am pretraining using one GPU on my machine. Can we use multiple GPUs to pretrain ELECTRA? Thanks for your reply.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8424/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8424/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8423
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8423/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8423/comments
https://api.github.com/repos/huggingface/transformers/issues/8423/events
https://github.com/huggingface/transformers/pull/8423
739,245,173
MDExOlB1bGxSZXF1ZXN0NTE3OTQxNzAw
8,423
Fix bart shape comment
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,604
1,604
1,604
CONTRIBUTOR
null
fixes #8384 Before the transpose, the shapes of x and encoder_hidden_states are both (BS, seq_len, model_dim), as far as I can tell.
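A tiny illustration of the shape convention the corrected comment describes; the tensor sizes are arbitrary.

```python
# Arbitrary sizes, just to show (BS, seq_len, model_dim) before the
# transpose that the fixed comment refers to.
import torch

bs, seq_len, model_dim = 2, 7, 16
x = torch.randn(bs, seq_len, model_dim)
print(x.shape)                  # torch.Size([2, 7, 16])
print(x.transpose(0, 1).shape)  # torch.Size([7, 2, 16])
```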
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8423/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8423/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8423", "html_url": "https://github.com/huggingface/transformers/pull/8423", "diff_url": "https://github.com/huggingface/transformers/pull/8423.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8423.patch", "merged_at": 1604946333000 }
https://api.github.com/repos/huggingface/transformers/issues/8422
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8422/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8422/comments
https://api.github.com/repos/huggingface/transformers/issues/8422/events
https://github.com/huggingface/transformers/pull/8422
739,241,861
MDExOlB1bGxSZXF1ZXN0NTE3OTM4OTk5
8,422
[docs] [testing] gpu decorators table
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks Stas!" ]
1,604
1,604
1,604
CONTRIBUTOR
null
This PR adds a table of the GPU requirement decorators that is perhaps easier to grasp quickly than the prose version alone (based on the discussion [here](https://github.com/huggingface/transformers/pull/8341#issuecomment-723705104)). @sgugger
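For context, the decorators the new table documents are used roughly like this; the two names below exist in `transformers.testing_utils` around this version, but the table itself is the authoritative list.

```python
# Hedged illustration of the GPU-requirement test decorators.
from transformers.testing_utils import require_torch_gpu, require_torch_multi_gpu

@require_torch_gpu
def test_needs_at_least_one_gpu():
    ...  # skipped automatically on CPU-only machines

@require_torch_multi_gpu
def test_needs_two_or_more_gpus():
    ...  # skipped unless more than one GPU is visible
```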
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8422/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8422/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8422", "html_url": "https://github.com/huggingface/transformers/pull/8422", "diff_url": "https://github.com/huggingface/transformers/pull/8422.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8422.patch", "merged_at": 1604950063000 }