url stringlengths 62-66 | repository_url stringclasses 1 value | labels_url stringlengths 76-80 | comments_url stringlengths 71-75 | events_url stringlengths 69-73 | html_url stringlengths 50-56 | id int64 377M-2.15B | node_id stringlengths 18-32 | number int64 1-29.2k | title stringlengths 1-487 | user dict | labels list | state stringclasses 2 values | locked bool 2 classes | assignee dict | assignees list | comments sequence | created_at int64 1.54k-1.71k | updated_at int64 1.54k-1.71k | closed_at int64 1.54k-1.71k ⌀ | author_association stringclasses 4 values | active_lock_reason stringclasses 2 values | body stringlengths 0-234k ⌀ | reactions dict | timeline_url stringlengths 71-75 | state_reason stringclasses 3 values | draft bool 2 classes | pull_request dict |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/8121 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8121/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8121/comments | https://api.github.com/repos/huggingface/transformers/issues/8121/events | https://github.com/huggingface/transformers/pull/8121 | 731,584,338 | MDExOlB1bGxSZXF1ZXN0NTExNjY1MDg4 | 8,121 | fix(trainer_callback): typo | {
"login": "borisdayma",
"id": 715491,
"node_id": "MDQ6VXNlcjcxNTQ5MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/715491?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/borisdayma",
"html_url": "https://github.com/borisdayma",
"followers_url": "https://api.github.com/users/borisdayma/followers",
"following_url": "https://api.github.com/users/borisdayma/following{/other_user}",
"gists_url": "https://api.github.com/users/borisdayma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/borisdayma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/borisdayma/subscriptions",
"organizations_url": "https://api.github.com/users/borisdayma/orgs",
"repos_url": "https://api.github.com/users/borisdayma/repos",
"events_url": "https://api.github.com/users/borisdayma/events{/privacy}",
"received_events_url": "https://api.github.com/users/borisdayma/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks a ton!"
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | # What does this PR do?
Fix a typo in `trainer_callback`
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR. @sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8121/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8121/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8121",
"html_url": "https://github.com/huggingface/transformers/pull/8121",
"diff_url": "https://github.com/huggingface/transformers/pull/8121.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8121.patch",
"merged_at": 1603901731000
} |
https://api.github.com/repos/huggingface/transformers/issues/8120 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8120/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8120/comments | https://api.github.com/repos/huggingface/transformers/issues/8120/events | https://github.com/huggingface/transformers/pull/8120 | 731,522,856 | MDExOlB1bGxSZXF1ZXN0NTExNjEzNzE2 | 8,120 | Rename add_start_docstrings_to_callable | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | COLLABORATOR | null | # What does this PR do?
This PR renames all `add_start_docstrings_to_callable` to a more explicit `add_start_docstrings_to_model_forward`. This should avoid confusion on the use of this decorator.
(It's an internal function so there should be no breaking change.) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8120/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8120/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8120",
"html_url": "https://github.com/huggingface/transformers/pull/8120",
"diff_url": "https://github.com/huggingface/transformers/pull/8120.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8120.patch",
"merged_at": 1603906952000
} |
https://api.github.com/repos/huggingface/transformers/issues/8119 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8119/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8119/comments | https://api.github.com/repos/huggingface/transformers/issues/8119/events | https://github.com/huggingface/transformers/pull/8119 | 731,509,572 | MDExOlB1bGxSZXF1ZXN0NTExNjAyNzA0 | 8,119 | feat(wandb): save model as artifact | {
"login": "borisdayma",
"id": 715491,
"node_id": "MDQ6VXNlcjcxNTQ5MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/715491?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/borisdayma",
"html_url": "https://github.com/borisdayma",
"followers_url": "https://api.github.com/users/borisdayma/followers",
"following_url": "https://api.github.com/users/borisdayma/following{/other_user}",
"gists_url": "https://api.github.com/users/borisdayma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/borisdayma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/borisdayma/subscriptions",
"organizations_url": "https://api.github.com/users/borisdayma/orgs",
"repos_url": "https://api.github.com/users/borisdayma/repos",
"events_url": "https://api.github.com/users/borisdayma/events{/privacy}",
"received_events_url": "https://api.github.com/users/borisdayma/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Ok I submitted #8121 for the typo in `_new_step`.\r\n\r\nAs for this PR, I'm thinking the logic should be:\r\n* use folder referenced by the last item returned from `_sorted_checkpoints`\r\n* in case it's empty, we should probably save the current checkpoint locally and upload it (since we specifically requested an upload to wandb)",
"After experimenting a bit more:\r\n\r\n1. Should we upload a model only if `_sorted_checkpoints(…)` is non-empty?\r\n * we don't necessarily get the last model (eg save every 100 steps with 520 steps total)\r\n\r\n2. Should we just save current state model at end of training in `args.output_dir + \"\\wandb\"`\r\n * we need to have access to `Trainer.save_model` from `WandbCallback`\r\n * we could decide to use `state.best_model_checkpoint` when present instead\r\n * we ignore any checkpoint\r\n",
"@sgugger Do you want me to make an attempt at giving access to the `Trainer` from callbacks or is it a pattern you want to avoid?",
"Hi @borisdayma, sorry I took a bit of time to reply on this, we were waiting for the new version of the model hub to materialize before moving forward on this.\r\n\r\nSo! The callbacks aren't 2-way in Transformers because then you have to be very careful about the order of their execution. Here the design was to just allow for callbacks that can read the state, not write, and for any piece of code that needs the write access, users should subclass the `Trainer`. The circular reference is also problematic for memory management so we leave 2-way callbacks for libraries focused on training models, and keep our simple reporting callbacks as they are :-)\r\n\r\nLike you said, you have access to the state with `best_model_checkpoint`. You can also unpack the model from the kwargs and access it. What is in the `Trainer.save_model` method that you need? Worst case scenario, you can even isntantiate an empty Trainer with just the model and the training arguments, and use its `save_model` method.",
"The issue with `best_model_checkpoint` is that it does not exist if there's no measurement metric set.\r\nIt could make sense to define it as the last checkpoint in that case.\r\n\r\nThe next issues would then be:\r\n* sometimes no model has been saved yet (maybe not enough epochs) while we want to log the model -> we could accept that it's an issue on the user side and give a warning\r\n* sometimes we may log every 100 steps and run for 180 steps. The last checkpoint is a bit old -> on this aspect I feel like the `Trainer` should automatically save the final step as a checkpoint\r\n\r\nWhat do you think?\r\n\r\nThe alternative would be to completely ignore that logic, let wandb save a model somewhere and upload it. I had not realized we could have access to `model` from the callback (though saving from `Trainer` is better as it handles TPU, save tokenizer, args and may also change in the future).",
"I think the most logical is to save the final model, the intermediate checkpoints are there to resume training is something went wrong, or load the best model at the end (which is done before the `on_train_end` event). That's also why we don't always save the model at the end of training, leaving that part to the user in a script.\r\n\r\nIf you use the logic of unpacking the model from the kwargs, you can simply create a new `Trainer` with it which can then save it easily with the `Trainer.save_model` method. Normally the model you unpack is the reference to the real model, so you won't have a `DistributedDataParallel` or something like that, and everything should work smoothly.\r\n",
"Finally getting closer!\r\n\r\nFew notes:\r\n* I import Trainer inside my function to avoid circular reference\r\n* I need to find a way to see if I need `Trainer` or `TfTrainer`, should I infer it through `type(model)`\r\n* I use `Trainer.state` as model metadata but maybe it's not that useful. The artifact is associated to a run which already has all the config parameters but it could be useful to relog it, or maybe I should just log the final metrics instead (that I can get through wandb)",
"I think it's becoming pretty cool. Here is [an artifact](https://wandb.ai/borisd13/huggingface/artifacts/model/run-test-clm/2fc1855753d610244974) logged with this method.\r\n\r\nThe current limitation is that it only works with Pytorch for now.\r\nAre there any plan for more synergy between `Trainer` or `TfTrainer` or should they be considered independently?",
"`TFTrainer` will be reworked in the near future and be a simple wrap around the Keras fit method (and the callbacks will be regular Keras callbacks).",
"I adjusted the metadata when we use `load_best_model_at_end`.\r\nIn that case we don't want to log the last metrics but only flos and best metric.",
"Small change:\r\n* force commit of last step\r\n* more robust way to get metadata, it will consider any data that has been logged and is a number\r\n\r\nI'm now ready on my side. Feel free to ping me!",
"@LysandreJik let me know if you have any comments",
"Happy new year everybody!\r\nSince it has already been a while since this PR was made, am I supposed to merge master and verify the tests are passing?\r\nLet me know if I need to do anything on my side."
] | 1,603 | 1,609 | 1,609 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
**EDIT**
The logic has been simplified.
The model is simply saved to a temporary folder and uploaded as an artifact at the end of training.
**ORIGINAL message**
Save trained model as artifact.
A few different possibilities:
* log model at `on_save` callback -> the issue is there could quickly be too many checkpoints to upload, high bandwidth…
* log model at `on_train_end`
* when we have access to `state.best_model_checkpoint`, we should just upload that folder
* we could upload the entire `output_dir` but it could be very large (same problem as `on_save`; ideally we only upload one model)
* we can save last model in a separate folder and upload it -> issue is that we don't have access to `Trainer.save_model` (where are the 2-way callbacks 😉)
* we could just use `_sorted_checkpoints` and log only last element (which would also be the best model when metrics are given)
I'm thinking I should actually go with the last option (use of `_sorted_checkpoints`). What do you think?
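As an illustration of the simplified logic above, here is a minimal sketch, assuming an existing `Trainer` instance and an active W&B run; the project name and save folder are hypothetical:
```python
import os

import wandb

run = wandb.init(project="huggingface")  # hypothetical project name
save_dir = os.path.join(trainer.args.output_dir, "final-model")  # hypothetical folder
trainer.save_model(save_dir)  # save via Trainer so TPU etc. are handled (see comments above)
artifact = wandb.Artifact("model", type="model")
artifact.add_dir(save_dir)  # record the folder's files in the artifact
run.log_artifact(artifact)  # upload it to the current run
```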
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR. @sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Trainer: @sgugger
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8119/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8119/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8119",
"html_url": "https://github.com/huggingface/transformers/pull/8119",
"diff_url": "https://github.com/huggingface/transformers/pull/8119.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8119.patch",
"merged_at": 1609835447000
} |
https://api.github.com/repos/huggingface/transformers/issues/8118 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8118/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8118/comments | https://api.github.com/repos/huggingface/transformers/issues/8118/events | https://github.com/huggingface/transformers/pull/8118 | 731,506,831 | MDExOlB1bGxSZXF1ZXN0NTExNjAwMzk2 | 8,118 | Document the various LM Auto models | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | COLLABORATOR | null | # What does this PR do?
This PR adds the documentation for the three classes of models for LM (`AutoModelForCausalLM`, `AutoModelForMaskedLM` and `AutoModelForSeq2SeqLM`) and their TF equivalents. It also removes the documentation of `AutoModelWithLMHead`, which is deprecated. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8118/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8118/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8118",
"html_url": "https://github.com/huggingface/transformers/pull/8118",
"diff_url": "https://github.com/huggingface/transformers/pull/8118.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8118.patch",
"merged_at": 1603906917000
} |
https://api.github.com/repos/huggingface/transformers/issues/8117 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8117/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8117/comments | https://api.github.com/repos/huggingface/transformers/issues/8117/events | https://github.com/huggingface/transformers/issues/8117 | 731,374,003 | MDU6SXNzdWU3MzEzNzQwMDM= | 8,117 | fast tokenizer issue on most user uploaded models | {
"login": "pommedeterresautee",
"id": 1029874,
"node_id": "MDQ6VXNlcjEwMjk4NzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1029874?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pommedeterresautee",
"html_url": "https://github.com/pommedeterresautee",
"followers_url": "https://api.github.com/users/pommedeterresautee/followers",
"following_url": "https://api.github.com/users/pommedeterresautee/following{/other_user}",
"gists_url": "https://api.github.com/users/pommedeterresautee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pommedeterresautee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pommedeterresautee/subscriptions",
"organizations_url": "https://api.github.com/users/pommedeterresautee/orgs",
"repos_url": "https://api.github.com/users/pommedeterresautee/repos",
"events_url": "https://api.github.com/users/pommedeterresautee/events{/privacy}",
"received_events_url": "https://api.github.com/users/pommedeterresautee/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
},
{
"login": "n1t0",
"id": 1217986,
"node_id": "MDQ6VXNlcjEyMTc5ODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1217986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/n1t0",
"html_url": "https://github.com/n1t0",
"followers_url": "https://api.github.com/users/n1t0/followers",
"following_url": "https://api.github.com/users/n1t0/following{/other_user}",
"gists_url": "https://api.github.com/users/n1t0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/n1t0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/n1t0/subscriptions",
"organizations_url": "https://api.github.com/users/n1t0/orgs",
"repos_url": "https://api.github.com/users/n1t0/repos",
"events_url": "https://api.github.com/users/n1t0/events{/privacy}",
"received_events_url": "https://api.github.com/users/n1t0/received_events",
"type": "User",
"site_admin": false
}
] | [
"Yes, we need to remove all the hardcoded configuration values of tokenizers in the transformers source code, and upload `tokenizer_config.json` files for all those models.\r\n\r\nAlso cc @n1t0 ",
"Very strange, making some tests, the Rust implem is much slower than the Python one...\r\n\r\nMeasure done on my Mac (i7)\r\n\r\n```python\r\nimport time\r\n\r\nfrom transformers import AutoTokenizer\r\n\r\ntext = \"\"\"\r\nIl se déduit des arrêts de la Cour de justice de l’Union européenne du 27 avril 2017 (A-Rosa Flussschiff GmbH, n° C-620/15) et du 6 février 2018 (Ömer Altun, n° C-359/16) que le juge, lorsqu’il est saisi de poursuites pénales du chef de travail dissimulé, pour défaut de déclarations aux organismes de protection sociale, et que la personne poursuivie produit des certificats E101, devenus A1, à l’égard des travailleurs concernés, délivrés au titre de l’article 14, paragraphe 2, sous a, du règlement n° 1408/71, ne peut, à l’issue du débat contradictoire, écarter lesdits certificats que si, sur la base de l’examen des éléments concrets recueillis au cours de l’enquête judiciaire ayant permis de constater que ces certificats avaient été obtenus ou invoqués frauduleusement et que l’institution émettrice saisie s’était abstenue de prendre en compte, dans un délai raisonnable, il caractérise une fraude constituée, dans son élément objectif, par l’absence de respect des conditions prévues à la disposition précitée et, dans son élément subjectif, par l’intention de la personne poursuivie de contourner ou d’éluder les conditions de délivrance dudit certificat pour obtenir l’avantage qui y est attaché.\r\nDoit ainsi être cassé l’arrêt de la cour d’appel qui écarte les certificats E101 sans avoir, au préalable, recherché si l’institution émettrice desdits certificats avait été saisie d’une demande de réexamen et de retrait de ceux-ci sur la base des éléments concrets recueillis dans le cadre de l’enquête judiciaire permettant, le cas échéant, de constater que ces certificats avaient été obtenus ou invoqués de manière frauduleuse et que l’institution émettrice s’était abstenue, dans un délai raisonnable, de les prendre en considération aux fins de réexamen du bien-fondé de la délivrance desdits certificats, et dans l’affirmative, sans établir, sur la base de l’examen des éléments concrets et dans le respect des garanties inhérentes au droit à un procès équitable, l’existence d’une fraude de la part de la société poursuivie, constituée, dans son élément matériel, par le défaut, dans les faits de la cause, des conditions prévues à l’article 14, paragraphe 2, sous a, précité aux fins d’obtention ou d’invocation des certificats E101 en cause et, dans son élément moral, par l’intention de ladite société de contourner ou d’éluder les conditions de délivrance dudit certificat pour obtenir l’avantage qui y est attaché (arrêt n° 1, pourvoi 13-88.631, arrêt n° 2, pourvoi 13-88.632 et arrêt n° 3, pourvoi n° 15-80.735).\r\nEn revanche, prononce par des motifs conformes à la doctrine de la Cour de l’Union européenne précitée, la cour d’appel qui, pour relaxer les prévenues, sociétés d’aviation civile, énonce que l’enquête n’ a pas permis de constater les éléments de fraude et s’abstient, en conséquence, d’opérer une vérification relative aux certificats E101 produits par elles (arrêt n° 4, pourvoi n° 1581316).\r\n\"\"\"\r\n\r\nfast = False\r\nrepeat = 1000\r\n\r\n# use_fast\r\ntokenizer = AutoTokenizer.from_pretrained(pretrained_model_name_or_path=\"camembert/camembert-base-ccnet\", use_fast=fast)\r\n\r\n_ = tokenizer(text)\r\nstart = time.time()\r\nfor _ in range(repeat):\r\n _ = tokenizer(text)\r\nprint(\"ccnet new\", time.time() - start)\r\n\r\n# CCNET Camembert saved few months 
ago\r\ntokenizer = AutoTokenizer.from_pretrained(pretrained_model_name_or_path=\"output/model\", use_fast=fast)\r\n\r\n_ = tokenizer(text)\r\nstart = time.time()\r\nfor _ in range(repeat):\r\n _ = tokenizer(text)\r\nprint(\"ccnet old\", time.time() - start)\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(pretrained_model_name_or_path=\"camembert-base\", use_fast=fast)\r\n\r\n_ = tokenizer(text)\r\nstart = time.time()\r\nfor _ in range(repeat):\r\n _ = tokenizer(text)\r\nprint(\"camembert base\", time.time() - start)\r\n\r\n\r\n```\r\n\r\nfast = False\r\n```\r\nwandb: WARNING W&B installed but not logged in. Run `wandb login` or set the WANDB_API_KEY env variable.\r\nccnet new 2.104267120361328\r\nccnet old 2.3693552017211914\r\nToken indices sequence length is longer than the specified maximum sequence length for this model (684 > 512). Running this sequence through the model will result in indexing errors\r\ncamembert base 2.245959997177124\r\n\r\nProcess finished with exit code 0\r\n\r\n```\r\n\r\n\r\nfast = True\r\n```\r\nwandb: WARNING W&B installed but not logged in. Run `wandb login` or set the WANDB_API_KEY env variable.\r\nccnet new 2.7245991230010986\r\nccnet old 2.7714219093322754\r\ncamembert base 2.9007809162139893\r\n```\r\n\r\nIt appears that fast tokenizer... is much slower than Python implementation on a Mac (measures not done a Linux machine).",
"Thank you for reporting this @pommedeterresautee. \r\n\r\nI think this is expected though: You are comparing a tokenizer that is based on SentencePiece (c++) with one in Rust. Our rust implementation is a bit slower than the SentencePiece when encoding a single sentence, but as soon as you are starting to encode batches with padding and post-processing, it gets faster!",
"Thank you, I didn't understood that batching was a thing during tokenization too!\r\n\r\n```python\r\nimport time\r\n\r\nfrom transformers import AutoTokenizer\r\n\r\ntext = \"\"\"\r\nIl se déduit des arrêts de la Cour de justice de l’Union européenne du 27 avril 2017 (A-Rosa Flussschiff GmbH, n° C-620/15) et du 6 février 2018 (Ömer Altun, n° C-359/16) que le juge, lorsqu’il est saisi de poursuites pénales du chef de travail dissimulé, pour défaut de déclarations aux organismes de protection sociale, et que la personne poursuivie produit des certificats E101, devenus A1, à l’égard des travailleurs concernés, délivrés au titre de l’article 14, paragraphe 2, sous a, du règlement n° 1408/71, ne peut, à l’issue du débat contradictoire, écarter lesdits certificats que si, sur la base de l’examen des éléments concrets recueillis au cours de l’enquête judiciaire ayant permis de constater que ces certificats avaient été obtenus ou invoqués frauduleusement et que l’institution émettrice saisie s’était abstenue de prendre en compte, dans un délai raisonnable, il caractérise une fraude constituée, dans son élément objectif, par l’absence de respect des conditions prévues à la disposition précitée et, dans son élément subjectif, par l’intention de la personne poursuivie de contourner ou d’éluder les conditions de délivrance dudit certificat pour obtenir l’avantage qui y est attaché.\r\nDoit ainsi être cassé l’arrêt de la cour d’appel qui écarte les certificats E101 sans avoir, au préalable, recherché si l’institution émettrice desdits certificats avait été saisie d’une demande de réexamen et de retrait de ceux-ci sur la base des éléments concrets recueillis dans le cadre de l’enquête judiciaire permettant, le cas échéant, de constater que ces certificats avaient été obtenus ou invoqués de manière frauduleuse et que l’institution émettrice s’était abstenue, dans un délai raisonnable, de les prendre en considération aux fins de réexamen du bien-fondé de la délivrance desdits certificats, et dans l’affirmative, sans établir, sur la base de l’examen des éléments concrets et dans le respect des garanties inhérentes au droit à un procès équitable, l’existence d’une fraude de la part de la société poursuivie, constituée, dans son élément matériel, par le défaut, dans les faits de la cause, des conditions prévues à l’article 14, paragraphe 2, sous a, précité aux fins d’obtention ou d’invocation des certificats E101 en cause et, dans son élément moral, par l’intention de ladite société de contourner ou d’éluder les conditions de délivrance dudit certificat pour obtenir l’avantage qui y est attaché (arrêt n° 1, pourvoi 13-88.631, arrêt n° 2, pourvoi 13-88.632 et arrêt n° 3, pourvoi n° 15-80.735).\r\nEn revanche, prononce par des motifs conformes à la doctrine de la Cour de l’Union européenne précitée, la cour d’appel qui, pour relaxer les prévenues, sociétés d’aviation civile, énonce que l’enquête n’ a pas permis de constater les éléments de fraude et s’abstient, en conséquence, d’opérer une vérification relative aux certificats E101 produits par elles (arrêt n° 4, pourvoi n° 1581316).\r\n\"\"\"\r\n\r\n\r\nrepeat = 1000\r\n\r\n# use_fast\r\ntokenizer = AutoTokenizer.from_pretrained(pretrained_model_name_or_path=\"camembert/camembert-base-ccnet\", use_fast=True)\r\n\r\n_ = tokenizer(text)\r\nstart = time.time()\r\nfor _ in range(repeat):\r\n _ = tokenizer([text] * 10)\r\nprint(\"fast\", time.time() - start)\r\n\r\ntokenizer = 
AutoTokenizer.from_pretrained(pretrained_model_name_or_path=\"camembert/camembert-base-ccnet\", use_fast=False)\r\n\r\n_ = tokenizer(text)\r\nstart = time.time()\r\nfor _ in range(repeat):\r\n _ = tokenizer([text] * 10)\r\nprint(\"slow\", time.time() - start)\r\n\r\n```\r\n\r\nProduces\r\n```\r\nwandb: WARNING W&B installed but not logged in. Run `wandb login` or set the WANDB_API_KEY env variable.\r\nfast 16.272130966186523\r\nslow 22.52426290512085\r\n```\r\n... as expected!",
"@pommedeterresautee \r\n\r\nHi, I am not sure it's a `fast` tokenizers bug but maybe more a property that was (maybe unintentionnally) dropped from Tokenizers.\r\n\r\n```python\r\nfrom transformers import AutoTokenizer\r\ntokenizer = AutoTokenizer.from_pretrained(\"camembert/camembert-base-ccnet\", use_fast=False)\r\ntokenizer.model_max_length\r\n# Out[4]: 1000000000000000019884624838656\r\n```\r\n\r\nCan you tell us what's the actual bug for you in the end ? Just to make sure the fix I am working on will actually work as generally as possible",
"I think you are right, it's more about a dropped property in the config file or a change in the source code than a bug specific to the fast tokenizer.\r\nI discovered the issue because I was comparing the model + tokenizer as exported few months ago with the fast tokenizer of today and thought it was because of the fast tokenizer. My \"old\" export returns me 512 when I call `max_len`.\r\n\r\nStill it's not returning the correct value, fast tokenizer or not.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hi, this issue is still not resolved; i have the same problem with `camembert/camembert-large`"
] | 1,603 | 1,687 | 1,610 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 3.4.0
- Platform: Linux-5.8.0-25-generic-x86_64-with-glibc2.32
- Python version: 3.8.6
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
### Who can help
@mfuntowicz @julien-c
## Information
Found the bug on `camembert/camembert-base-ccnet`, but it is probably common to many models uploaded by users.
On the camembert-base model, it works out of the box (there is no bug).
## To reproduce
Since `tokenizers` 0.9, it's possible to load the many unigram-based tokenizers with the fast Rust implementation.
It appears that the `tokenizer_config.json` file of some of them is not up to date; in particular, the `"model_max_length": 512` entry is missing.
Because of that, the value of `model_max_length` defaults to a very large integer.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("camembert/camembert-base-ccnet", use_fast=True)
tokenizer.model_max_length
# Out[4]: 1000000000000000019884624838656
```
To fix it, the `model_max_length` field has to be added to the config file.
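In the meantime, a minimal sketch of a load-time workaround (passing `model_max_length` explicitly should override the missing config entry):
```python
from transformers import AutoTokenizer

# Extra kwargs to from_pretrained are forwarded to the tokenizer's __init__,
# so the missing config value can be supplied by hand.
tokenizer = AutoTokenizer.from_pretrained(
    "camembert/camembert-base-ccnet", use_fast=True, model_max_length=512
)
assert tokenizer.model_max_length == 512
```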
## Expected behavior
I would expect `tokenizer.model_max_length` to be equal to 512.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8117/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8117/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8116 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8116/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8116/comments | https://api.github.com/repos/huggingface/transformers/issues/8116/events | https://github.com/huggingface/transformers/pull/8116 | 731,335,263 | MDExOlB1bGxSZXF1ZXN0NTExNDU4MjUy | 8,116 | Add labels padding in tokenization_utils_base.py | {
"login": "cccntu",
"id": 31893406,
"node_id": "MDQ6VXNlcjMxODkzNDA2",
"avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cccntu",
"html_url": "https://github.com/cccntu",
"followers_url": "https://api.github.com/users/cccntu/followers",
"following_url": "https://api.github.com/users/cccntu/following{/other_user}",
"gists_url": "https://api.github.com/users/cccntu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cccntu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cccntu/subscriptions",
"organizations_url": "https://api.github.com/users/cccntu/orgs",
"repos_url": "https://api.github.com/users/cccntu/repos",
"events_url": "https://api.github.com/users/cccntu/events{/privacy}",
"received_events_url": "https://api.github.com/users/cccntu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi there! Thanks for your PR! I see a few problems with this approach.\r\n1. Not all labels need to be padded. If you are doing classification (with one or multiple labels) you don't want to pad them\r\n2. I imagine you are in a token classification problem, and in those, the number of labels is not necessarily the same as the number of tokens, as the labels are for words and tokens can be parts of words.\r\n\r\nI think the proper fix is to create an option in `DataCollatorWithPadding` to activate label padding (so a flag `pad_labels_too` or something like that) that then pads the labels to the maximum length of the labels (so `difference` that you use here might be a different number for the labels).",
"Thanks for the reply!\r\n\r\nConsidering that different problem may pad labels differently, I think maybe it's better to leave it as is and use this:\r\n```python\r\nclass MyDataCollatorWithPadding(DataCollatorWithPadding):\r\n def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:\r\n batch = super().__call__(features)\r\n # add custom label padding here\r\n return batch\r\n```\r\nJust came up with this. 😃 Not sure if it works.",
"Just tried it, the above code does not work, because the error is in `self.tokenizer.pad()`.\r\nHere is the truncated trace:\r\n```\r\nsrc/transformers/data/data_collator.py\", line 103, in __call__\r\n batch = self.tokenizer.pad(\r\nsrc/transformers/tokenization_utils_base.py\", line 2408, in pad\r\n return BatchEncoding(batch_outputs, tensor_type=return_tensors)\r\nsrc/transformers/tokenization_utils_base.py\", line 186, in __init__\r\n self.convert_to_tensors(tensor_type=tensor_type, prepend_batch_axis=prepend_batch_axis)\r\nsrc/transformers/tokenization_utils_base.py\", line 571, in convert_to_tensors\r\n raise ValueError(\r\nValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length.\r\n```\r\n\r\nTherefore `pad_labels_too` needs to be in `tokenizer.pad()`.\r\n@sgugger \r\n> the number of labels is not necessarily the same as the number of tokens, as the labels are for words and tokens can be parts of words.\r\n\r\nMaybe we will need a `LabelPaddingStrategy` similar to `PaddingStrategy`. But I don't know what kinds of other label padding strategies needs to be added.",
"I think you should use the newly pushed DataCollatorForTokenClassification from #8274.",
"Very nice! I guess I will close this PR."
] | 1,603 | 1,604 | 1,604 | CONTRIBUTOR | null | # What does this PR do?
This PR makes `tokenizer.pad()` also pad `'labels'`.
I tried to use this:
https://github.com/huggingface/transformers/blob/8065fea87007fbf7542fc060ff8ddd0b5df567da/src/transformers/data/data_collator.py#L69
But since labels is not padded, the result cannot be turned into a tensor: `ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length.`
This patch solves the problem.
It seems logical to me that `tokenizer.pad()` should also pad `'labels'`.
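As a minimal sketch of the intended behavior for token classification (illustrative; `-100` is the label index PyTorch's cross-entropy loss ignores):
```python
import torch

features = [{"labels": [1, 2, 3]}, {"labels": [4]}]
max_len = max(len(f["labels"]) for f in features)
# Pad each label list to the batch maximum so the batch can become a tensor.
labels = torch.tensor(
    [f["labels"] + [-100] * (max_len - len(f["labels"])) for f in features]
)
# tensor([[   1,    2,    3],
#         [   4, -100, -100]])
```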
This portion of code was last changed in #4015 @n1t0 @thomwolf @LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8116/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8116/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8116",
"html_url": "https://github.com/huggingface/transformers/pull/8116",
"diff_url": "https://github.com/huggingface/transformers/pull/8116.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8116.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/8115 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8115/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8115/comments | https://api.github.com/repos/huggingface/transformers/issues/8115/events | https://github.com/huggingface/transformers/pull/8115 | 731,303,456 | MDExOlB1bGxSZXF1ZXN0NTExNDMxNTA0 | 8,115 | Fix eval ref miss in Chinese WWM. | {
"login": "wlhgtc",
"id": 16603773,
"node_id": "MDQ6VXNlcjE2NjAzNzcz",
"avatar_url": "https://avatars.githubusercontent.com/u/16603773?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wlhgtc",
"html_url": "https://github.com/wlhgtc",
"followers_url": "https://api.github.com/users/wlhgtc/followers",
"following_url": "https://api.github.com/users/wlhgtc/following{/other_user}",
"gists_url": "https://api.github.com/users/wlhgtc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wlhgtc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wlhgtc/subscriptions",
"organizations_url": "https://api.github.com/users/wlhgtc/orgs",
"repos_url": "https://api.github.com/users/wlhgtc/repos",
"events_url": "https://api.github.com/users/wlhgtc/events{/privacy}",
"received_events_url": "https://api.github.com/users/wlhgtc/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Before we merge you'll need to run `make fixup` or `make style` at the root of your transformers clone to pass the code quality test.",
"I had not realized that LTP has pinned master to 3.2.0. We can't have a script in examples that doesn't run on master, so I suggest copying the current version and moving it in the examples/contrib folder (or hosting it on your GitHub if you prefer) while still linking to it from the README.\r\n\r\nWe are in the process of rewriting all examples (and this script as it is will change in the next few days) to match the current version of transformers/datasets so this master requirement is really important. ",
"> I had not realized that LTP has pinned master to 3.2.0. We can't have a script in examples that doesn't run on master, so I suggest copying the current version and moving it in the examples/contrib folder (or hosting it on your GitHub if you prefer) while still linking to it from the README.\r\n> \r\n> We are in the process of rewriting all examples (and this script as it is will change in the next few days) to match the current version of transformers/datasets so this master requirement is really important.\r\n\r\nSeem the requirements of LTP is fixed. \r\nSo I move to `run_chinese_ref.py` to `examples/contrib` and update readme.",
"Great! If you can just merge Lysandre's suggestion, this should be good to merge then.",
"Applied the suggestion, merging!"
] | 1,603 | 1,604 | 1,604 | CONTRIBUTOR | null | Sorry for my recklessness: I didn't add a parameter for `eval_ref_file` for Chinese WWM.
It was found by @johnsonice [here](https://github.com/huggingface/transformers/pull/7925#issuecomment-717701325).
So I fixed it and updated the readme for Chinese WWM. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8115/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8115/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8115",
"html_url": "https://github.com/huggingface/transformers/pull/8115",
"diff_url": "https://github.com/huggingface/transformers/pull/8115.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8115.patch",
"merged_at": 1604005719000
} |
https://api.github.com/repos/huggingface/transformers/issues/8114 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8114/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8114/comments | https://api.github.com/repos/huggingface/transformers/issues/8114/events | https://github.com/huggingface/transformers/issues/8114 | 731,261,853 | MDU6SXNzdWU3MzEyNjE4NTM= | 8,114 | Pegasus: Error when training with increased input length | {
"login": "zephyrzilla",
"id": 8082613,
"node_id": "MDQ6VXNlcjgwODI2MTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/8082613?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zephyrzilla",
"html_url": "https://github.com/zephyrzilla",
"followers_url": "https://api.github.com/users/zephyrzilla/followers",
"following_url": "https://api.github.com/users/zephyrzilla/following{/other_user}",
"gists_url": "https://api.github.com/users/zephyrzilla/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zephyrzilla/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zephyrzilla/subscriptions",
"organizations_url": "https://api.github.com/users/zephyrzilla/orgs",
"repos_url": "https://api.github.com/users/zephyrzilla/repos",
"events_url": "https://api.github.com/users/zephyrzilla/events{/privacy}",
"received_events_url": "https://api.github.com/users/zephyrzilla/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"(This issue has nothing to do with pegasus or input dimension).\r\n\r\n+ `max_length` should not be set like that, it refers to the maximum length to generate.\r\n+ You probably don't want to initialize from random. \r\ne+ You should be using Seq2SeqTrainer, and this script https://github.com/huggingface/transformers/blob/master/examples/seq2seq/finetune_trainer.py#L2\r\n\r\nYou may need to modify line 188 to pass `max_position_embeddings=2048` and line 189 to pass `model_max_length=2048`."
] | 1,603 | 1,603 | 1,603 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.4.0
- Platform: MacOS Mojave (10.14.6)
- Python version: 3.7.9
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): NA
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@sshleifer
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
blenderbot: @mariamabarham
Bart: @sshleifer
Marian: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
## Information
Model I am using (Bert, XLNet ...): Pegasus
The problem arises when using:
* [ ] the official example scripts: NA
* [x] my own modified scripts:
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: NA
* [x] my own task or dataset: Long input training on the standard CNN/DM dataset.
## To reproduce
Steps to reproduce the behavior:
1. I am trying to train a Pegasus model using the following script on a larger input length.
```python
from transformers import PegasusConfig, PegasusForConditionalGeneration, PegasusTokenizer
from transformers import Trainer, TrainingArguments
from examples.seq2seq.utils import Seq2SeqDataset
config = PegasusConfig(
max_length=2048,
max_position_embeddings=2048,
encoder_layers=16,
decoder_layers=4,
num_beams=2
)
tokenizer = PegasusTokenizer.from_pretrained("sshleifer/distill-pegasus-cnn-16-4")
model = PegasusForConditionalGeneration(config=config)
dataset = Seq2SeqDataset(data_dir='data/cnn_dm', tokenizer=tokenizer, max_source_length=2048, max_target_length=150)
training_args = TrainingArguments(
output_dir="./data/output",
overwrite_output_dir=True,
num_train_epochs=1,
save_steps=10,
save_total_limit=2,
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=dataset,
prediction_loss_only=True,
)
trainer.train()
```
2. I am getting the following error message
```
/Users/sdasgupta02/code/summarization/summarization-long/transformers/src/transformers/trainer.py:263: FutureWarning: Passing `prediction_loss_only` as a keyword argument is deprecated and won't be possible in a future version. Use `args.prediction_loss_only` instead. Setting `args.prediction_loss_only=True
FutureWarning,
0%| | 0/2 [00:00<?, ?it/s]Traceback (most recent call last):
File "/Users/sdasgupta02/code/summarization/summarization-long/transformers/train_scratch.py", line 34, in <module>
trainer.train()
File "/Users/sdasgupta02/code/summarization/summarization-long/transformers/src/transformers/trainer.py", line 756, in train
tr_loss += self.training_step(model, inputs)
File "/Users/sdasgupta02/code/summarization/summarization-long/transformers/src/transformers/trainer.py", line 1056, in training_step
loss = self.compute_loss(model, inputs)
File "/Users/sdasgupta02/code/summarization/summarization-long/transformers/src/transformers/trainer.py", line 1080, in compute_loss
outputs = model(**inputs)
File "/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
TypeError: forward() missing 1 required positional argument: 'input_ids'
0%| | 0/2 [00:00<?, ?it/s]
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
Expected behaviour is to be able to train this Pegasus model on the CNN/DM dataset with longer input sequences (> 1024).
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8114/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8114/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8113 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8113/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8113/comments | https://api.github.com/repos/huggingface/transformers/issues/8113/events | https://github.com/huggingface/transformers/pull/8113 | 731,227,866 | MDExOlB1bGxSZXF1ZXN0NTExMzc1MDMw | 8,113 | [WIP] Add Tapas model | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I think you might need a rebase on the latest master as your PR seems to have taken master from a while ago (all the modifications in the README should not be there for instance). If it messes the diff, we can always close this PR and open a new one, the branch will be safe :-)",
"EDIT: I've always used `git fetch upstream` and `git rebase upstream/master`, before pushing my local `tapas_v3` branch to my fork on Github. I didn't know that all models starts with 1. now in the README (therefore I manually changed the numbering), should be fixed now, branch is up-to-date.",
"@sgugger should I keep rebasing this branch everyday, to keep up with master (as long as the code is not being reviewed)?\r\n\r\nAlso, is it normal that I have to do a force push everytime I perform a rebase and want to push to Github? Because when I want to do simply `git push -u origin tapas_v3`, I always get \r\n```\r\n(env) PS C:\\Users\\niels.rogge\\Documents\\Python projecten\\transformers> git push -u origin tapas_v3\r\nTo https://github.com/NielsRogge/transformers.git\r\n ! [rejected] tapas_v3 -> tapas_v3 (non-fast-forward)\r\nerror: failed to push some refs to 'https://github.com/NielsRogge/transformers.git'\r\nhint: Updates were rejected because the tip of your current branch is behind\r\nhint: its remote counterpart. Integrate the remote changes (e.g.\r\nhint: 'git pull ...') before pushing again.\r\nhint: See the 'Note about fast-forwards' in 'git push --help' for details.\r\n```\r\n\r\nafter a local rebase. ",
"I'm far from being an expert on git and I don't use the command line anyway, so can't really help you with that.",
"> Hi! Thanks a lot @NielsRogge for implementing this model! It's a great model that definitely deserves its place in the huggingface library, and you did a great job implementing it! I'm reviewing the code below.\r\n\r\nThank you! Great to hear :)\r\n\r\n@LysandreJik I addressed all of the comments. To summarize:\r\n* `TapasConfig` and `TapasTokenizer` now inherit from `PretrainedConfig` and `PreTrainedTokenizer` respectively. For `TapasTokenizer`, a lot of the code of `tokenization_bert.py` was copied (such as the `BasicTokenizer` and `WordPieceTokenizer` classes), since the tokenization logic of text itself is the same. However, some things are different (see `tokenization_tapas.py`). \r\n* `Modeling_tapas_utilities.py` and `tokenization_tapas_utilities.py` are also gone now, they are added to the bottom of `modeling_tapas.py` and `tokenization_tapas.py` respectively. \r\n* `pandas` is not a dependency in the code (`pandas` is not imported in any of the files), but I assume you want to switch to `datasets` so that people can use Tapas using only the `Transformers` library? However, currently, in `tokenization_tapas.py` some Pandas logic is used, for example in the `_tokenize_table` function `.iterrows()` is used, so this will require some changes.\r\n* concerning git, I assume I should stop rebasing at some point? I can do it as long as I'm the only one committing?",
"Hey @LysandreJik, thank you for your feedback.\r\n\r\nI've fixed all comments that you had, apart from the tokenizer itself. \r\n\r\n> This is a complicated part, so please let us know if you would like some help along the way/if you want us to take over from here, in which case we would open PRs against your branch with proposals.\r\n\r\nI'm happy to accept PRs against my branch, because it's not that clear to me how the tokenizer should be implemented in the best possible way. Does that mean I should stop rebasing my branch with `upstream/master`? Since I read about the \"golden rule of rebasing\", which states to \"never use it on public branches\" 😄 \r\n\r\n",
"Alright, I'll take a look and open a PR on your fork with the proposed changes. Yes, please don't rebase on `master` anymore as it would mess up my history as I start working on your branch.\r\n\r\nWe can keep it as it is until we merge now, and fix the merge conflict as the last step.",
"Hi @NielsRogge, I'm nearly done with the tokenizer changes, but we're focusing on getting version v3.5.0 out today and tomorrow. I'll try to open a PR on your repository then. Please hold off from adding commits now, as rebasing/merging would be very painful now! :smile: ",
"@LysandreJik no worries, I'm not working on the `tapas_v3` branch.\r\n\r\nHowever, there's still some important work left to do in terms of preparing the data for the model. In the original implementation, this is done in 3 steps:\r\n\r\n## 1. TSV in the SQA format\r\nAny dataset (SQA, WTQ, WikiSQL) is first transformed into a TSV with the same columns as the SQA format:\r\n* id: id of the table-question pair, if applicable.\r\n* annotator: id of the annotator, if applicable.\r\n* position: integer indicating if the question is the first, second, third,... related to the table. Only required in case of conversational setup (such as SQA)\r\n* question: string\r\n* table_file: string, name of a csv file containing the tabular data\r\n* answer_text: list of strings (each string being a cell value that is part of the answer)\r\n* answer_coordinates: list of string tuples (each tuple being a cell coordinate, i.e. row, column pair that is part of the answer)\r\n* aggregation_label: only required in case of strong supervision for aggregation (such as WikiSQL-supervised)\r\n* answer_float: float answer to the question. Only required in case of weak supervision for aggregation (such as WTQ and WikiSQL)\r\n\r\nIf people want to fine-tune `TapasForQuestionAnswering` on their own dataset, they must prepare it in this TSV format, and associated csv files containing the tabular data. It would be great if we can upload all datasets (SQA, WTQ, WikiSQL and WikiSQL-supervised) in SQA format to the HuggingFace datasets hub (they can easily be obtained from the official Tapas Github repo).\r\n\r\n## 2. Intermediate format: Interaction\r\nNext, each table-question pair is transformed into an intermediate protocol buffer message which the authors call **Interaction**. Its properties are defined [here](https://github.com/google-research/tapas/blob/master/tapas/protos/interaction.proto), and include things like Table, Question, Answer, AnswerCoordinate, Cell, NumericValue, NumericValueSpan, etc. Populating all the fields of an Interaction based on the TSV is defined in [interaction_utils.py](https://github.com/google-research/tapas/blob/master/tapas/utils/interaction_utils.py), [interaction_utils_parser.py](https://github.com/google-research/tapas/blob/master/tapas/utils/interaction_utils_parser.py), [number_annotation_utils.py](https://github.com/google-research/tapas/blob/master/tapas/utils/number_annotation_utils.py), [number_utils.py](https://github.com/google-research/tapas/blob/master/tapas/utils/number_utils.py) and [text_utils.py](https://github.com/google-research/tapas/blob/master/tapas/utils/text_utils.py).\r\n\r\n## 3. tf.train.Example\r\nFinally, each interaction is transformed into an actual training example (`tf.train.Example`), containing the input_ids, mask, etc. as `tf.train.Feature` objects. This is defined in [tf_example_utils.py](https://github.com/google-research/tapas/blob/master/tapas/utils/tf_example_utils.py).\r\n\r\n_________________\r\n`TapasTokenizer` must be able to directly convert a row (or multiple rows, i.e. a batch) from a TSV file into a dictionary with PyTorch tensors as values (in other words, combine steps 2 and 3). The remaining work is basically step 2. As I worked with `Pandas` as standard format for tables in my implementation, my idea was to define regular Python classes for each property of the Interaction proto. That is why I have defined a `NumericValue` class, `NumericValueSpan` class, `Cell` class, `Date` class, etc. in `tokenization_tapas.py`. 
Instances of these classes are then created each time `TapasTokenizer` is called.\r\n\r\nI've noticed that the creation of the numeric values is not entirely correct in the `tapas_v3` branch. I'm now working on a correct implementation of this in a branch called `tapas_v3_up_to_date_with_master` (in which I also regularly rebase with upstream/master). It only involves changes to `tokenization_tapas.py`. The changes can eventually be added to `tapas_v3`. I'll wait until your PR is merged before I add those changes.\r\n\r\nSo my questions are:\r\n- for each of the `xxx_utils.py` files which are used in step 2, there are corresponding `xxx_utils_test.py` files. Could you help in setting up tests in `test_tokenization_tapas.py`, to make sure we're following the original implementation? \r\n- I'm still assuming that tables are `pandas` dataframes in `tokenization_tapas.py`. Is this OK? Or do you want to change to `dataset`? Wouldn't it be more logical to have SQA/WTQ/WikiSQL as `dataset` objects, and the actual tables as `pandas` dataframes? Pandas is not a dependency of `tokenization_tapas.py`, but tables must be provided as a Pandas dataframe to `TapasTokenizer`.\r\n\r\n\r\n\r\n",
"Hi @NielsRogge! I just finished the tokenizer and its tests. The tests were kind of painful, as the API is a bit different (accepting a dataframe instead of a string), so I had to override most of the tests.\r\n\r\nHere's how you can review:\r\n\r\n- I did a `make style` on your branch, as I find the code easier to navigate once it's on par with our style.\r\n- However, this introduces a bunch of changes that would make the PR hard to review.\r\n- In order to circumvent this, I've pushed two branches: \r\n - `tapas-style`, which is the exact branch you have, but with `make style` run on it and a few cosmetic adjustments\r\n - `tapas-final`, which builds on top of `tapas-style` to implement all the tokenizer API and tests\r\n- For you to review, the easiest would be to review the PR I opened [here](https://github.com/huggingface/transformers/pull/8482), which aims to merge `tapas-final` into `tapas-style`. This way you can see the actual changes and only these.\r\n- I described my changes in that PR's description so as not to clog this one.\r\n- I setup a todo list if items remaining on the tokenizer, which are not blocking for the merge.\r\n\r\nPlease review https://github.com/huggingface/transformers/pull/8482, and tell me if you're okay with the changes. If you're okay, I'll merge `tapas-final` into `tapas-style`, and open a PR on your fork with the branch `tapas-style`, which will have all the changes.\r\n\r\nThis is one of the best hands-on introduction to git you could ask for :smile: \r\n\r\n\r\nRegarding your questions about data processing:\r\n\r\n> for each of the xxx_utils.py files which are used in step 2, there are corresponding xxx_utils_test.py files. Could you help in setting up tests in test_tokenization_tapas.py, to make sure we're following the original implementation?\r\n\r\nYes, I can help you with that.\r\n\r\n> I'm still assuming that tables are pandas dataframes in tokenization_tapas.py. Is this OK? Or do you want to change to dataset? Wouldn't it be more logical to have SQA/WTQ/WikiSQL as dataset objects, and the actual tables as pandas dataframes? Pandas is not a dependency of tokenization_tapas.py, but tables must be provided as a Pandas dataframe to TapasTokenizer.\r\n\r\nActually, `datasets.Dataset` behave very similarly to `pd.DataFrame`s. Nevertheless, we can start with Pandas DataFrames for now, and change to dataset's Datasets once progress is made",
"@LysandreJik I have improved the parsing of numeric values of both the question and table in `prepare_for_model` of `tokenization_tapas.py` to reflect the original implementation. What it does is turn the cells of a table into `Cell` objects (with potentially associated `NumericValue` objects) and the question into a `Question` object (with potentially a list of associated `NumericValueSpan` objects), before adding numeric-related features.\r\n\r\nBesides this, I have fixed some of the comments I had on [my review of your PR](https://github.com/huggingface/transformers/pull/8482), and commented \"done\" on the ones that are fixed.\r\n\r\n## To do:\r\n\r\n- [x] Add correct implementation of `prev_label_ids` in case of a batch of table-question pairs (in case of a batch, all questions should refer to the same table). The implementation should reflect the [original implementation](https://github.com/google-research/tapas/blob/d8638f0909b3de32a85fe7491769d47d645d8e22/tapas/utils/tf_example_utils.py#L1155) as follows: for a given table-question pair in a batch, \r\n\r\n```\r\nprev_label_ids = self.get_answer_ids(\r\n column_ids, row_ids, table_data, answer_text, answer_coordinates\r\n )\r\n```\r\n\r\nHere, it's important that the `get_answer_ids` function is called with the `column_ids` and `row_ids` of the **current** table-question pair in the batch, but the `answer_text` and `answer_coordinates` of the **previous** table-question pair in the batch. \r\n\r\n- [x] Fix the error that I'm currently having in the colab notebooks above (see first message of this PR), when `answer_coordinates` and `answer_text` are provided to the tokenizer. However, what's weird is that when calling `TapasTokenizer` on the real SQA dev set, everything works fine. Might be that I'm doing something wrong with the coordinates and text I provide?\r\n- [x] Add support for the `drop_rows_to_fit` and `cell_trim_length` attributes of `TapasTokenizer`, which should reflect the original API (see also my [suggestion](https://github.com/huggingface/transformers/pull/8482#discussion_r522131827) on how this could be done for `cell_trim_length`). Also, setting `truncation=True` in `TapasTokenizer` doesn't do anything currently. \r\n- [x] I've added support for the special [EMPTY] token for empty cells in a table (based on the `format_text` method, see [here](https://github.com/google-research/tapas/blob/4908213eb4df7aa988573350278b44c4dbe3f71b/tapas/utils/tf_example_utils.py#L330)). Does this have implications for the `add_special_tokens` method? I assume not? What about `get_special_tokens_mask`? To be verified.\r\n- [x] **Testing TapasTokenizer:** make sure that the PyTorch tensors that `TapasTokenizer` creates are exactly the same as those of the original implementation on the same input data. I've created a [notebook](https://colab.research.google.com/drive/1MzyO-QSA5PZNCNoWa2EIrSA8UEqyUZVp) that tests this. Currently there's a misalignment due to the fact that the original implementation tokenizes a cell value like \"1.0\" into [\"1\", \".\"], whereas my implementation tokenizes this into [\"1\", \".\", \"0\"]. Filed a Github issue to resolve this.\r\n- [x] **Testing forward pass:** make sure that `TapasForQuestionAnswering`/`TapasForSequenceClassification` return the same `sequence_output`, `pooled_output`, `logits`, and `loss` tensors as the original implementation on the same input data. 
I've created notebooks that test this:\r\n- SQA (`tapas_sqa_inter_masklm_base_reset`): [PyTorch](https://colab.research.google.com/drive/14bdSwdzvCF2gDF3L0z58IT1fSNzOXKey#scrollTo=6fvJbFF-xKfh) vs [Tensorflow](https://colab.research.google.com/drive/1KWD187cWDP-lOOwKjwzGtGfVR9UWzZVL#scrollTo=KTlX8ZEuRTBa) is giving me the same output (inference only). UPDATE: also loss calculation is OK, see [PyTorch](https://colab.research.google.com/drive/1z2ZRIBXOTk3Aqh6OYpRak6iJhs7e1l2S#scrollTo=2kakMASqmrG5) vs [Tensorflow](https://colab.research.google.com/drive/1Ba3jARJcAqRTd0uuPxOTjOrruIcXIV54#scrollTo=2ZIdZEJGw5RK).\r\n- WTQ (`tapas_wtq_wikisql_sqa_inter_masklm_base_reset`): [PyTorch](https://colab.research.google.com/drive/1Z4T9ZzMvg3vGZ3dSNWbiA4FVbgMMkq_9#scrollTo=EXS4MmCy8Dti) vs [Tensorflow](https://colab.research.google.com/drive/1klaSP99q2aicwpVV9GrmL5nvrPGqrSPH#scrollTo=SIE7bTJMVuSh). I'm getting the same `sequence_output` and `logits_aggregation` on the same input data :) UPDATE: also loss calculation is OK, see [PyTorch](https://colab.research.google.com/drive/19Uq6k1f1178okv80Julfa0Zg41fvFN9x#scrollTo=LEOCtWmWt2IH) vs [Tensorflow](https://colab.research.google.com/drive/1ScF4R7Au8gbC5lN1ehTQFdDknmr2mMRz#scrollTo=GLgez6jJx9Xc) notebooks.\r\n- Tabfact (`tapas_tabfact_inter_masklm_base_reset`): [PyTorch](https://colab.research.google.com/drive/1JDwWrwHSt8KhGBQ57BDlCFZEe0xMGTQA?usp=sharing#scrollTo=z6esPfPMFH1p) vs [Tensorflow](https://colab.research.google.com/drive/14-6VFjvrIiXsYPQEtv8MN-a8Mpo1UNH7#scrollTo=LYBkiqo38e7l) is giving me the same classification logits, confirming that the relative position embeddings implementation is OK. \r\n- [x] **Testing backward pass:** I've created a [notebook](https://colab.research.google.com/drive/17L97m7cq7J_pnUHmQW6-ksGGVpbbGryP#scrollTo=y0YzoGY24I0C) that fine-tunes `TapasForQuestionAnswering` on 10 examples of the WTQ test set, just to see if it's able to overfit them. I've tried both with randomly initialized classification heads, as well as with the already-finetuned WTQ model. This seems to work well for the former (it can overfit the cell selection, however for aggregation this seems more difficult - probably due to the weak supervision). However, for the already fine-tuned one, the loss stays zero after the third example already. Is this a bug, or is this possible? Update: confirmed by the author.\r\n\r\nActually, reviewing the code of `modeling_tapas.py`(loss calculation + backward pass) is the most important. ",
"> @NielsRogge if you want I can take care of the remaining steps for the tokenizer:\r\n> \r\n> > * Add support for the drop_rows_to_fit and cell_trim_length attributes of TapasTokenizer, which should reflect the original API (see also my suggestion on how this could be done for cell_trim_length). Also, setting truncation=True in TapasTokenizer doesn't do anything currently.\r\n> > * I've added support for the special [EMPTY] token for empty cells in a table (based on the format_text method, see here). Does this have implications for the add_special_tokens method? I assume not? What about get_special_tokens_mask? To be verified.\r\n\r\n@LysandreJik ok that would be great!\r\n\r\n\r\n",
"Thanks for the reviews, I've updated the requested changes and marked the ones I did as resolved. \r\n\r\n@LysandreJik, could you maybe fix the remaining comments? In short:\r\n\r\n* remove the encoder-decoder logic of `TapasModel` (only remove this from the API of `TapasModel`, but leave them in the code that's copied from BERT (that the user won't see and won't use). I'll let you do this since I don't want to mess up anything.\r\n* remove some tests and add a slow test as requested above\r\n\r\n... then I'll mark these as resolved. Besides these, there's also the truncation of `TapasTokenizer` which should still be implemented. I copied what was left here:\r\n* Add support for the `drop_rows_to_fit` and `cell_trim_length` attributes of `TapasTokenizer`, which should reflect the original API (see also [my suggestion](https://github.com/huggingface/transformers/pull/8482#discussion_r522131827) on how this could be done for `cell_trim_length`). The original implementation can be found [here](https://github.com/google-research/tapas/blob/4908213eb4df7aa988573350278b44c4dbe3f71b/tapas/utils/tf_example_utils.py#L999).\r\n* Add support for the special `[EMPTY]` token for empty cells in a table (see the `_tokenize` method of `TapasTokenizer`, which now uses the `format_text` method as in the [original implementation](https://github.com/google-research/tapas/blob/4908213eb4df7aa988573350278b44c4dbe3f71b/tapas/utils/tf_example_utils.py#L330)). Does this have implications for the `add_special_tokens` method? I assume not? What about `get_special_tokens_mask`? To be verified.\r\n* There was also a small discrepancy between the tokenization of TAPAS and the original implementation, see this [Github issue](https://github.com/google-research/tapas/issues/90#issuecomment-735723963). I don't expect this too big of an issue, but maybe you know more about this.\r\n\r\nAnd then I assume we're done 👍 (finally)",
"Sure, will do so. Probably tomorrow morning/afternoon!",
"Closing this one as the most up-to-date is now #9117 ."
] | 1,603 | 1,608 | 1,608 | CONTRIBUTOR | null | # What does this PR do?
Since the beginning of August, I have been working in my free time on incorporating the [Tapas](https://arxiv.org/abs/2004.02349) algorithm by Google AI into the Transformers library (because this library is awesome and I want to contribute to it!). Tapas is basically a BERT model with some clever modifications for natural language understanding related to **tabular data** (structured data like tables, or even HTML). Adding this model could foster research in this area 😄
Demos of my current implementation:
* [colab notebook](https://colab.research.google.com/drive/1feRe1Jyjtw7iZVRiKWulP6WjBW6hBIJE?usp=sharing) to showcase `TapasForQuestionAnswering` on WTQ (WikiTable Questions by Stanford University)
* [colab notebook](https://colab.research.google.com/drive/1CDPUr7c8uCNnCtAmmFfj91j-sIzdcqym?usp=sharing) to showcase `TapasForQuestionAnswering` on SQA (Sequential Question Answering by Microsoft Research)
* [colab notebook](https://colab.research.google.com/drive/1JDwWrwHSt8KhGBQ57BDlCFZEe0xMGTQA?usp=sharing) to showcase `TapasForSequenceClassification` on TabFact (Table Fact checking, introduced at ICLR this year)
The model weights are available on the [original GitHub repository](https://github.com/google-research/tapas), and I wrote a conversion script (similar to other models in the Transformers library) to load them into their PyTorch counterpart.
I suggest reading the [paper](https://arxiv.org/abs/2004.02349) as well as my [notes](https://docs.google.com/document/d/1WIdZX6of1l-c4AmT909PT7Dpj57EfqUh8BBPaf9ztOw/edit?usp=sharing) to gain a full understanding of how the model works and how I implemented it. There's also a [blog post](https://ai.googleblog.com/2020/04/using-neural-networks-to-find-answers.html) by Google AI as well as a [video](https://www.youtube.com/watch?v=cIUtRNhY6Rw&ab_channel=YannicKilcher) by Yannic Kilcher explaining how the algorithm works.
The main classes are `TapasConfig`, `TapasModel`, `TapasForQuestionAnswering` and `TapasForSequenceClassification`, which can all be found in `modeling_tapas.py`. I'm quite sure the models are OK; the output is the same as that of the TensorFlow implementation. I added very extensive documentation (docstrings) to all classes, which you can view by running the `make html` command from the docs directory. Feedback appreciated!
However, there are 2 things for which I need some help/opinions to finish this work:
## 1. Making TapasTokenizer fully Transformers-compliant
To implement `TapasTokenizer`, I need some help/opinions. I suggest using Pandas dataframes as the central object for tabular data (as shown in the Colab notebooks above) and letting the API be as follows:
```
from transformers import TapasTokenizer
import pandas as pd
data = {'Actors': ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"],
'Age': ["56", "45", "59"],
'Number of movies': ["87", "53", "69"],
'Date of birth': ["18 december 1963", "11 november 1974", "6 may 1961"]}
table = pd.DataFrame.from_dict(data)
queries = ["When was Brad Pitt born?",
"Which actor appeared in the least number of movies?",
"What is the average number of movies?"]
tokenizer = TapasTokenizer.from_pretrained("tapas-base-finetuned-wtq")
inputs = tokenizer(table=table, queries=queries)
```
Currently I've only implemented the `batch_encode_plus` method of TapasTokenizer, because it's not really clear to me how to make it fully compatible with the Transformers library, since the way that data is prepared for the model is a bit different compared to BERT/RoBERTa/etc (see also my notes above). It's also not straightforward to make it compatible with the different padding/truncation strategies of Transformers. Currently, the way it works is as follows: there’s a function `_get_token_budget` in `tokenization_tapas.py` that calculates the number of tokens left for the flattened table after tokenizing a question. This is currently set to `self.model_max_length - (len(question_tokens) + 2)` (+ 2 for the CLS and SEP tokens), as was done in the original implementation. There is a hyperparameter when initializing `TapasTokenizer` called `drop_rows_to_fit` which drops rows of the table to fit into the token budget if set to `True`. If it’s set to `False` and a table is too big, it throws a `ValueError` indicating 'too many rows'.
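For concreteness, here is a minimal sketch of the budget computation described above (the method name follows the current draft; the exact signature is an assumption):
```python
def _get_token_budget(self, question_tokens):
    # Sketch only: tokens left for the flattened table after the question
    # plus the [CLS] and [SEP] special tokens, as in the original implementation.
    return self.model_max_length - (len(question_tokens) + 2)
```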
## 2. Testing
Currently I've written `test_modeling_tapas.py` (23 tests passed, 5 failed) and `test_modeling_tapas_utilities.py` (9 tests passed). However, there are 4 different settings to use `TapasForQuestionAnswering` (see my notes) and these all need to be tested (currently only 1 setting is tested) - some help here would be great. Besides this, tests should be added to see whether the model can be properly trained, as well as adding `test_tokenization_tapas.py` (which depends on how TapasTokenizer will be implemented).
Fixes the following issues (people requesting to add the model):
- #4166
- #4288
## Who can review?
I suggest @sgugger @LysandreJik since we already discussed this on the forum [here](https://discuss.huggingface.co/t/adding-a-new-model-to-transformers-with-additional-dependencies/916/15).
tokenizers: @mfuntowicz
DISCLAIMER: this is the first PR of my life; I've never done this before, so hopefully I don't mess anything up (just got the Pro Git book 😛). I assume I should not use `git rebase` anymore now that this branch is submitted as a PR, and should only use `git add`, `git commit` and `git push -u origin tapas_v3`? And `git pull origin tapas_v3` in case others make commits to my branch?
Is there a Slack channel where people can help me out in case I have git issues? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8113/reactions",
"total_count": 7,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8113/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8113",
"html_url": "https://github.com/huggingface/transformers/pull/8113",
"diff_url": "https://github.com/huggingface/transformers/pull/8113.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8113.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/8112 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8112/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8112/comments | https://api.github.com/repos/huggingface/transformers/issues/8112/events | https://github.com/huggingface/transformers/issues/8112 | 731,137,271 | MDU6SXNzdWU3MzExMzcyNzE= | 8,112 | Documentation code snippet has extra ) after model code | {
"login": "bhadreshpsavani",
"id": 26653468,
"node_id": "MDQ6VXNlcjI2NjUzNDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhadreshpsavani",
"html_url": "https://github.com/bhadreshpsavani",
"followers_url": "https://api.github.com/users/bhadreshpsavani/followers",
"following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}",
"gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions",
"organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs",
"repos_url": "https://api.github.com/users/bhadreshpsavani/repos",
"events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, this has been fixed on: https://github.com/huggingface/transformers/pull/8082 and is now available in the `master` documentation and will be updated in the next version. Thanks for opening an issue!"
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | Documentation at https://huggingface.co/transformers/model_doc/roberta.html#tfrobertaforsequenceclassification has code snippet
```
>> from transformers import RobertaTokenizer, TFRobertaForSequenceClassification
>> import tensorflow as tf
>> tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
>> model = TFRobertaForSequenceClassification.from_pretrained('roberta-base', return_dict=True))
>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
>> inputs["labels"] = tf.reshape(tf.constant(1), (-1, 1)) # Batch size 1
>> outputs = model(inputs)
>> loss = outputs.loss
>> logits = outputs.logits
```
In the fourth line there is an extra `)` at the end.
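For reference, the corrected line would presumably read:
```python
model = TFRobertaForSequenceClassification.from_pretrained('roberta-base', return_dict=True)
```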
This issue is also present in other models' code snippets. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8112/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8112/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8111 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8111/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8111/comments | https://api.github.com/repos/huggingface/transformers/issues/8111/events | https://github.com/huggingface/transformers/issues/8111 | 731,123,700 | MDU6SXNzdWU3MzExMjM3MDA= | 8,111 | [Model] mT5 Cross-Lingual Model | {
"login": "sumanthd17",
"id": 28291870,
"node_id": "MDQ6VXNlcjI4MjkxODcw",
"avatar_url": "https://avatars.githubusercontent.com/u/28291870?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sumanthd17",
"html_url": "https://github.com/sumanthd17",
"followers_url": "https://api.github.com/users/sumanthd17/followers",
"following_url": "https://api.github.com/users/sumanthd17/following{/other_user}",
"gists_url": "https://api.github.com/users/sumanthd17/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sumanthd17/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sumanthd17/subscriptions",
"organizations_url": "https://api.github.com/users/sumanthd17/orgs",
"repos_url": "https://api.github.com/users/sumanthd17/repos",
"events_url": "https://api.github.com/users/sumanthd17/events{/privacy}",
"received_events_url": "https://api.github.com/users/sumanthd17/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Will be a part of #6285",
"Hey, @sumanthd17 any update on this? ",
"@julien-c thanks for your amazing nlp lib. \r\nWhen do you plan to support mT5 ?\r\nWhen #6285 will be release ?\r\nCheers\r\nPhilippe ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,603 | 1,610 | 1,610 | NONE | null | # 🌟 New model addition
## Model description
Multilingual T5 (mT5) is a massively multilingual pretrained text-to-text transformer model, trained following a similar recipe as T5.
Weights, code are available.
Github Repo: [mT5 Weights and Code](https://github.com/google-research/multilingual-t5)
Paper: [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934)
## Open source status
* [x] the model implementation is available: [Implementation](https://github.com/google-research/multilingual-t5)
* [x] the model weights are available: [checkpoints](https://github.com/google-research/multilingual-t5#released-model-checkpoints)
* [x] who are the authors: (@craffel, @adarob)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8111/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8111/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8110 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8110/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8110/comments | https://api.github.com/repos/huggingface/transformers/issues/8110/events | https://github.com/huggingface/transformers/pull/8110 | 731,118,231 | MDExOlB1bGxSZXF1ZXN0NTExMjkzMjIw | 8,110 | [gh actions] run artifacts job always | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Please check if it worked :)",
"Hmm, I guess it'd be have been more practical to experiment with the push job and not scheduled - but ok, let's wait - if it's not working, I will attack it on all fronts.",
"One more thing: \r\n\r\n[run_all_tests_torch_and_tf_gpu](https://github.com/stas00/transformers/blob/e248a114e7f13a970bd8b5e52c0a032c014f4a57/.github/workflows/self-scheduled.yml#L57) has 3 independent test suite runs and currently if one fails the others don't run! Which is not what is wanted I believe. I suggest that we add `if: always() ` to the last 2 test suites as they are independent from the first one. ",
"I'm fine with that as long as the workflow run can still be correctly marked as a failure.\r\n\r\nTo rephrase the requirement at the risk of redundancy, we want a red x next to the job when any test fails:\r\n\r\n\r\nScreenshot shows that we are meeting this requirement at the moment.\r\n",
"Yes, that's the idea. I think the intention of the `if` condition is to define only whether a job is to be run, and not impact the total outcome. But we will see that already with the results of this PR - as artifact upload job will surely succeed. If the total outcome is [x] and artifacts have run, then we can replicate that condition to the other test suites. ",
"OK, It did the trick, see: https://github.com/huggingface/transformers/actions/runs/334754818 \r\nSpecifically, as requested: the final result is [x] and the artifact job did run regardless.\r\n\r\nSo we can apply this `if: always` condition to other `pytest` jobs on the same workflow. There is a nuance of a possibility of pre-pytest jobs failing and the `pytest` jobs running anyway with this condition, but if that situation arrives, it makes no difference - those jobs will just immediately fail. \r\n\r\nNotes:\r\n\r\n* It puts the artifact files in the same place from different jobs, so we need to call that artifact upload job differently for each job\r\n* The so-so part is that the artifacts on github actions are provided as a single zipped file, so you have to first download the file, unpack it and only then you can see the results. \r\n* Moreover it doesn't show the artifact file until **all** jobs have completed, despite saying that the file was successfully uploaded.\r\n\r\n**A Possible workaround:**\r\n\r\nOne possible optimization here could be to `cat reports/report_tests_failures.txt` right after `pytest`, in a separate mini-step, so that you can immediately see just the failures and not wait for everything else to finish and go through the multiple steps to get to this file. (It has to be a separate step (name+run) not to affect the success/failure exit status of the `pytest` step.\r\n\r\nPlease review the outcome/my notes and let me know whether we proceed with this to other jobs.\r\n\r\nSpecifically to moving forward, we probably need to wait for this to be merged: https://github.com/huggingface/transformers/pull/8007 as it has multiple changes to the CI files.\r\n\r\n",
"i have read your report. It is very clear, thank you. let's try a careful cat solution where we keep the size of the results as small as reasonably possible. one or two screens of text that show which tests failed (and short tracebacks (pytest --tb=short) ). Thanks for the help this is going to be so much easier to use than the status quo. Let me know if further clarifications/decisions would be helpful, and feel free to push back if implementation is difficult. ",
"wrt/ proposed workaround:\r\n\r\nSince the proposed quick `cat` is going to be in its own collapsible \"tab\" and will have only failures, let's start with just `cat reports/report_tests_failures.txt` and we can create other types of reports should it prove too verbose, and just cut those instead.\r\n\r\nBut also I could probably create `reports/report_tests_failures_short.txt` report which will emulate `pytest --tb=short`, so that we will have both long and short reports.\r\n\r\nwrt/ the rest:\r\n\r\nit still stands, correct? i.e. we still want the full artifacts in github actions",
"> we still want the full artifacts in github actions\r\n\r\nYes, don't see any downside.",
"It looks like the errors are generated with either `--tb=long` or `--tb=short` at run time, so when the reports time comes they are already saved as one or the other, but not both.\r\n\r\nSo if we want the short and the long reports, one possibility is to generate the long report and then to try to make it shorter with some regex or some simple truncation - resulting in a short report.\r\n\r\nAnother approach that might work is collecting the failures as they happen - I need to investigate whether I can control the format in that hook or not without impacting the global reporting, as I'm sure that ideally we do want the full long report too. Please correct me if I'm wrong and `--tb=short` is sufficient for CIs (i.e. there will be no full long failures report anywhere - neither terminal logs nor report files), then it's easy.\r\n",
"I trust you to make those choices as you see fit. feel free to ignore tb=short.",
"I nailed it, got the cake and ate it too."
] | 1,603 | 1,604 | 1,603 | CONTRIBUTOR | null | I see that the recently added artifacts job won't run if the test job failed, which defeats the purpose. ([example](https://github.com/huggingface/transformers/runs/1317972448?check_suite_focus=true))
After some research it appears that adding `if: always()` may do the trick. Supposedly such a job should always be run regardless of the outcome of the previous jobs. Found it [here](https://github.community/t/continue-on-error-allow-failure-ui-indication/16773/2?u=stas00), documented [here](https://docs.github.com/en/free-pro-team@latest/actions/reference/context-and-expression-syntax-for-github-actions#always).
Let's merge and see if it fixes the issue.
@LysandreJik, @sgugger, @sshleifer | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8110/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8110/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8110",
"html_url": "https://github.com/huggingface/transformers/pull/8110",
"diff_url": "https://github.com/huggingface/transformers/pull/8110.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8110.patch",
"merged_at": 1603863919000
} |
https://api.github.com/repos/huggingface/transformers/issues/8109 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8109/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8109/comments | https://api.github.com/repos/huggingface/transformers/issues/8109/events | https://github.com/huggingface/transformers/issues/8109 | 731,080,247 | MDU6SXNzdWU3MzEwODAyNDc= | 8,109 | T5Tokenizer: decode does not show special tokens | {
"login": "jsrozner",
"id": 1113285,
"node_id": "MDQ6VXNlcjExMTMyODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1113285?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jsrozner",
"html_url": "https://github.com/jsrozner",
"followers_url": "https://api.github.com/users/jsrozner/followers",
"following_url": "https://api.github.com/users/jsrozner/following{/other_user}",
"gists_url": "https://api.github.com/users/jsrozner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jsrozner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jsrozner/subscriptions",
"organizations_url": "https://api.github.com/users/jsrozner/orgs",
"repos_url": "https://api.github.com/users/jsrozner/repos",
"events_url": "https://api.github.com/users/jsrozner/events{/privacy}",
"received_events_url": "https://api.github.com/users/jsrozner/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"T5: @patrickvonplaten I think you need to set `_additional_special_tokens`.",
"@jsrozner want to try to fix?",
"This is a duplicate of #5142 and will be fixed with the PR linked below. Thanks for reporting it - seems like multiple people were running into this issue!!!"
] | 1,603 | 1,605 | 1,605 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 3.4.0
- Platform: macOS-10.15.7-x86_64-i386-64bit
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: N/a
- Using distributed or parallel set-up in script? N/a
### Who can help
examples/seq2seq: @sshleifer
## Information
Model I am using (Bert, XLNet ...): T5Tokenizer
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQuAD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```python
from transformers import T5Tokenizer

input = "word <pad> <unk> </s> </s>"  # string that already contains special tokens
t5tokenizer = T5Tokenizer.from_pretrained('t5-small')
# encode, then decode without skipping special tokens
tokenized = t5tokenizer.batch_encode_plus([input], max_length=10, padding="longest", return_tensors="pt").input_ids
print(t5tokenizer.batch_decode(tokenized, skip_special_tokens=False, clean_up_tokenization_spaces=False))
```
IDs output: ` _word <pad> <unk> </s> </s>`
decode output: `word ⁇ `
## Expected behavior
The tokens should be shown in the decoded output, but everything except for the unknown token is dropped (no pad or EOS).
`convert_ids_to_tokens` followed by `convert_tokens_to_string` also drops the tokens.
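A quick diagnostic sketch (reusing the variables from the snippet above) showing that the drop happens in the token-to-string step rather than in the id-to-token step:
```python
# The token list itself still contains the special tokens; only the
# conversion of tokens back into a string drops them.
tokens = t5tokenizer.convert_ids_to_tokens(tokenized[0].tolist())
print(tokens)  # entries such as '<pad>' and '</s>' appear here
```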
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8109/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8109/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8108 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8108/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8108/comments | https://api.github.com/repos/huggingface/transformers/issues/8108/events | https://github.com/huggingface/transformers/pull/8108 | 730,901,553 | MDExOlB1bGxSZXF1ZXN0NTExMTM2NjU0 | 8,108 | Support various BERT relative position embeddings | {
"login": "zhiheng-huang",
"id": 9144018,
"node_id": "MDQ6VXNlcjkxNDQwMTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/9144018?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhiheng-huang",
"html_url": "https://github.com/zhiheng-huang",
"followers_url": "https://api.github.com/users/zhiheng-huang/followers",
"following_url": "https://api.github.com/users/zhiheng-huang/following{/other_user}",
"gists_url": "https://api.github.com/users/zhiheng-huang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhiheng-huang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhiheng-huang/subscriptions",
"organizations_url": "https://api.github.com/users/zhiheng-huang/orgs",
"repos_url": "https://api.github.com/users/zhiheng-huang/repos",
"events_url": "https://api.github.com/users/zhiheng-huang/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhiheng-huang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @zhiheng-huang,\r\n\r\nThanks for the PR!\r\n\r\nIn general I'm fine with this PR - think adding more types of position embeddings to a model is OK. \r\nAlso can you rebase your PR to the most current version of `master` - I think you are working on a rather old version.\r\n\r\n",
"> Hey @zhiheng-huang,\r\n> \r\n> Thanks for the PR!\r\n> \r\n> In general I'm fine with this PR - think adding more types of position embeddings to a model is OK.\r\n> Also can you rebase your PR to the most current version of `master` - I think you are working on a rather old version.\r\n\r\nIt was rebased to master on 10/27, will rebase again for the new revision.",
"@patrickvonplaten, I did a rebase and I am not sure this is the correct way to review commit \"Address review comment\". Please let me know if there is a better way to upload the new diff.",
"Hey @zhiheng-huang,\r\n\r\nI think there was a problem with the rebase it seems like you added all commits on master on top of your PR. \r\nThis happens from time to time sadly :-/ \r\n\r\nThe way I'd fix it is to first save your changes of the last commit: https://github.com/huggingface/transformers/pull/8108/commits/ffe2e64c64f03c141cc085c8f3f509ae2e0992e2 somewhere (maybe a new branch).\r\n\r\nThen correctly reset the head of your branch before all other commitns were falsely added:\r\n```\r\ngit reset --hard 36729ee\r\n```\r\n\r\nThen add the single commit you saved in another branch\r\n\r\n\r\n```\r\ngit cherry-pick ffe2e64\r\n```\r\n\r\nand finally either you correctly rebase OR the safer option here would probably be to merge the master into your branch\r\n\r\n```\r\ngit fetch upstream/master\r\ngit merge upstream master\r\n```\r\n\r\nHope this helps!",
"> Hey @zhiheng-huang,\r\n> \r\n> I think there was a problem with the rebase it seems like you added all commits on master on top of your PR.\r\n> This happens from time to time sadly :-/\r\n> \r\n> The way I'd fix it is to first save your changes of the last commit: [ffe2e64](https://github.com/huggingface/transformers/commit/ffe2e64c64f03c141cc085c8f3f509ae2e0992e2) somewhere (maybe a new branch).\r\n> \r\n> Then correctly reset the head of your branch before all other commitns were falsely added:\r\n> \r\n> ```\r\n> git reset --hard 36729ee\r\n> ```\r\n> \r\n> Then add the single commit you saved in another branch\r\n> \r\n> ```\r\n> git cherry-pick ffe2e64\r\n> ```\r\n> \r\n> and finally either you correctly rebase OR the safer option here would probably be to merge the master into your branch\r\n> \r\n> ```\r\n> git fetch upstream/master\r\n> git merge upstream master\r\n> ```\r\n> \r\n> Hope this helps!\r\n\r\nThanks. @patrickvonplaten. this helps but I may have to revert the commits merged to zhiheng-huang:transformers-relative-embedding. I created a new PR at https://github.com/huggingface/transformers/pull/8276 to continue the review. Thanks!"
] | 1,603 | 1,604 | 1,604 | CONTRIBUTOR | null | # What does this PR do?
The default BERT model `bert-base-uncased` was pre-trained with absolute position embeddings. We provide three pre-trained models which were pre-trained on the same training data (BooksCorpus and English Wikipedia) as the original BERT model, but with different relative position embeddings (Shaw et al., Self-Attention with Relative Position Representations, https://arxiv.org/abs/1803.02155, and Huang et al., Improve Transformer Models with Better Relative Position Embeddings, https://arxiv.org/abs/2009.13658, accepted to Findings of EMNLP 2020). We show how to fine-tune these pre-trained models on the SQuAD1.1 dataset, and we also report the EM and F1 scores on the SQuAD1.1 dev set. See examples/question-answering/README.md for more details.
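For intuition, here is an illustrative toy sketch of the Shaw-style relative-key scoring the paragraph refers to (an editor's example, not the PR's code; all names and sizes are made up):
```python
import torch
import torch.nn as nn

seq_len, head_dim, max_rel = 8, 16, 4
rel_emb = nn.Embedding(2 * max_rel + 1, head_dim)  # one embedding per clipped relative distance

pos = torch.arange(seq_len)
rel = (pos[None, :] - pos[:, None]).clamp(-max_rel, max_rel) + max_rel  # (L, L) distance buckets
q = torch.randn(seq_len, head_dim)
k = torch.randn(seq_len, head_dim)

# content-content term plus a query-to-relative-key term (scaling and softmax omitted)
scores = q @ k.T + torch.einsum("ld,lrd->lr", q, rel_emb(rel))
```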
Fixes # (issue)
N/A
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a GitHub issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik @julien-c @patrickvonplaten
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8108/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8108/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8108",
"html_url": "https://github.com/huggingface/transformers/pull/8108",
"diff_url": "https://github.com/huggingface/transformers/pull/8108.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8108.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/8107 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8107/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8107/comments | https://api.github.com/repos/huggingface/transformers/issues/8107/events | https://github.com/huggingface/transformers/pull/8107 | 730,869,661 | MDExOlB1bGxSZXF1ZXN0NTExMTA4MDQz | 8,107 | [testing] port test_trainer_distributed to distributed pytest + TestCasePlus enhancements | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"LGTM! cc @patrickvonplaten for awareness!"
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | This PR:
* [x] ports `test_trainer_distributed` to run with pytest - it will skip if gpus < 2.
* [x] includes various improvements via refactoring now 3 use cases of distributed testing by extending `TestCasePlus` with a whole set of convenient features:
Feature 1: A set of fully resolved important file and dir path accessors.
In tests we often need to know where things are relative to the current test file, which is not trivial since the test could be invoked from more than one directory or could reside in different sub-directories. This class solves the problem by sorting out all the basic paths and providing easy accessors to them (see the usage sketch after the lists below):
* ``pathlib`` objects (all fully resolved):
- ``test_file_path`` - the current test file path (=``__file__``)
- ``test_file_dir`` - the directory containing the current test file
- ``tests_dir`` - the directory of the ``tests`` test suite
- ``examples_dir`` - the directory of the ``examples`` test suite
- ``repo_root_dir`` - the directory of the repository
- ``src_dir`` - the directory of ``src`` (i.e. where the ``transformers`` sub-dir resides)
* stringified paths - same as above but these return a string, rather than a ``pathlib`` object
- ``test_file_path_str``
- ``test_file_dir_str``
- ``tests_dir_str``
- ``examples_dir_str``
- ``repo_root_dir_str``
- ``src_dir_str``
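A hypothetical usage sketch of these accessors (the test class and the asserted values are illustrative only):
```python
from transformers.testing_utils import TestCasePlus

class PathAccessorsTest(TestCasePlus):
    def test_paths(self):
        # fully resolved pathlib objects
        assert self.tests_dir.name == "tests"
        assert (self.repo_root_dir / "src").is_dir()
        # the *_str variants return plain strings
        assert isinstance(self.src_dir_str, str)
```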
Feature 2: Get a copy of the ``os.environ`` object that sets up ``PYTHONPATH`` correctly, depending on the test suite it's invoked from. This is useful for invoking external programs from the test suite - e.g. distributed training.
```
def test_whatever(self):
env = self.get_env()
# now call the external program, passing ``env`` to it
```
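A minimal sketch of that last step (the command is a placeholder):
```python
import subprocess
import sys

def test_external_program(self):
    env = self.get_env()  # os.environ copy with PYTHONPATH set up correctly
    cmd = [sys.executable, "-c", "import transformers; print(transformers.__version__)"]
    subprocess.run(cmd, env=env, check=True)
```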
All these are also documented in `testing.rst`.
Fixes: #8058
@sgugger, @LysandreJik, @sshleifer
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8107/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8107/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8107",
"html_url": "https://github.com/huggingface/transformers/pull/8107",
"diff_url": "https://github.com/huggingface/transformers/pull/8107.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8107.patch",
"merged_at": 1603900293000
} |
https://api.github.com/repos/huggingface/transformers/issues/8106 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8106/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8106/comments | https://api.github.com/repos/huggingface/transformers/issues/8106/events | https://github.com/huggingface/transformers/pull/8106 | 730,848,281 | MDExOlB1bGxSZXF1ZXN0NTExMDg4ODQ4 | 8,106 | Move installation instructions to the top | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | COLLABORATOR | null | # What does this PR do?
This PR clarifies the instructions to run the examples by moving the source install to the top and putting it in bold. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8106/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8106/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8106",
"html_url": "https://github.com/huggingface/transformers/pull/8106",
"diff_url": "https://github.com/huggingface/transformers/pull/8106.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8106.patch",
"merged_at": 1603834341000
} |
https://api.github.com/repos/huggingface/transformers/issues/8105 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8105/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8105/comments | https://api.github.com/repos/huggingface/transformers/issues/8105/events | https://github.com/huggingface/transformers/pull/8105 | 730,833,520 | MDExOlB1bGxSZXF1ZXN0NTExMDc1NzUy | 8,105 | New run_clm script | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | COLLABORATOR | null | # What does this PR do?
This PR adds an example of causal language modeling fine-tuning (or training from scratch) using the 🤗 Datasets library. It supports loading a dataset via its name (from the hub) or from local files. A test of training on a small text is added.
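For illustration, the two loading modes look roughly like this with 🤗 Datasets (the dataset name and file paths below are placeholders):
```python
from datasets import load_dataset

# by name, from the hub:
datasets = load_dataset("wikitext", "wikitext-2-raw-v1")

# or from local files:
datasets = load_dataset("text", data_files={"train": "train.txt", "validation": "valid.txt"})
```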
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8105/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8105/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8105",
"html_url": "https://github.com/huggingface/transformers/pull/8105",
"diff_url": "https://github.com/huggingface/transformers/pull/8105.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8105.patch",
"merged_at": 1603895939000
} |
https://api.github.com/repos/huggingface/transformers/issues/8104 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8104/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8104/comments | https://api.github.com/repos/huggingface/transformers/issues/8104/events | https://github.com/huggingface/transformers/issues/8104 | 730,821,702 | MDU6SXNzdWU3MzA4MjE3MDI= | 8,104 | RagSequenceForGeneration how to get document texts retrieved in response to a query | {
"login": "mchari",
"id": 30506151,
"node_id": "MDQ6VXNlcjMwNTA2MTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/30506151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchari",
"html_url": "https://github.com/mchari",
"followers_url": "https://api.github.com/users/mchari/followers",
"following_url": "https://api.github.com/users/mchari/following{/other_user}",
"gists_url": "https://api.github.com/users/mchari/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchari/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchari/subscriptions",
"organizations_url": "https://api.github.com/users/mchari/orgs",
"repos_url": "https://api.github.com/users/mchari/repos",
"events_url": "https://api.github.com/users/mchari/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchari/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"retriever.index.get_doc_dicts(docs_dict[\"doc_ids\"])[0]['text'] gets me the text of the retrieved documents."
] | 1,603 | 1,605 | 1,603 | NONE | null | When I run the retriever separately, how can I find out the text of the documents (from the doc_ids ?) that are retrieved ?
I created the retriever using:

```python
retriever = RagRetriever.from_pretrained(
    rag_example_args.rag_model_name,
    index_name="custom",
    passages_path=passages_path,
    index_path=index_path,
    n_docs=8,
)
tokenizer = RagTokenizer.from_pretrained(rag_example_args.rag_model_name)
model = RagSequenceForGeneration.from_pretrained(
    rag_example_args.rag_model_name, index_name="custom", indexed_dataset=dataset
)

question_hidden_states = model.question_encoder(input_ids)[0]
docs_dict = retriever(input_ids.numpy(), question_hidden_states.detach().numpy(), return_tensors="pt")
```
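One way to map the returned `doc_ids` back to the retrieved passage texts appears to be the index accessor mentioned in the resolution comment (untested sketch):
```python
doc_dicts = retriever.index.get_doc_dicts(docs_dict["doc_ids"])
# one dict per query; the "text" field holds the retrieved passages
print(doc_dicts[0]["text"])
```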
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8104/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8104/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8103 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8103/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8103/comments | https://api.github.com/repos/huggingface/transformers/issues/8103/events | https://github.com/huggingface/transformers/issues/8103 | 730,783,615 | MDU6SXNzdWU3MzA3ODM2MTU= | 8,103 | run_language_modeling crashes with import cannot import name 'DataCollatorForWholeWordMask' from 'transformers' | {
"login": "spacemanidol",
"id": 3886120,
"node_id": "MDQ6VXNlcjM4ODYxMjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3886120?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/spacemanidol",
"html_url": "https://github.com/spacemanidol",
"followers_url": "https://api.github.com/users/spacemanidol/followers",
"following_url": "https://api.github.com/users/spacemanidol/following{/other_user}",
"gists_url": "https://api.github.com/users/spacemanidol/gists{/gist_id}",
"starred_url": "https://api.github.com/users/spacemanidol/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/spacemanidol/subscriptions",
"organizations_url": "https://api.github.com/users/spacemanidol/orgs",
"repos_url": "https://api.github.com/users/spacemanidol/repos",
"events_url": "https://api.github.com/users/spacemanidol/events{/privacy}",
"received_events_url": "https://api.github.com/users/spacemanidol/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"As explained in the README of the examples, you need an [installation from source](https://huggingface.co/transformers/installation.html#installing-from-source) to run the examples, which you don't have, otherwise you would have this object.\r\n\r\nAlternatively, you can run the examples associated to your current version by using the files on the [last release tag](https://github.com/huggingface/transformers/releases/tag/v3.4.0)."
] | 1,603 | 1,603 | 1,603 | NONE | null | ## Environment info
- `transformers` version: 3.4.0
- Platform: linux
- Python version: 3.8
- PyTorch version (GPU?): 1.5
- Tensorflow version (GPU?):
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: No
## Information
I am trying to run the example for [language modeling](https://github.com/huggingface/transformers/tree/master/examples/language-modeling) but can't get it to start; the import fails:

```
Traceback (most recent call last):
  File "run_language_modeling.py", line 32, in <module>
    from transformers import (
ImportError: cannot import name 'DataCollatorForWholeWordMask' from 'transformers' (/home/spacemanidol/miniconda3/envs/prunetransformer/lib/python3.8/site-packages/transformers/__init__.py)
```
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ X] the official example scripts: (give details below)
running the language modeling example script
The tasks I am working on is:
* [ X] an official GLUE/SQUaD task: (give the name)
Language modeling
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Create a conda environment and install transformers from source.
2. Run language modeling example script
## Expected behavior
Example runs.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8103/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8103/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8102 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8102/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8102/comments | https://api.github.com/repos/huggingface/transformers/issues/8102/events | https://github.com/huggingface/transformers/pull/8102 | 730,713,306 | MDExOlB1bGxSZXF1ZXN0NTEwOTcyNjAw | 8,102 | Adjust setup so that all extras run on Windows | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | COLLABORATOR | null | # What does this PR do?
This removes some of the extra dependencies when they don't exist on Windows, so that the install doesn't fail.
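A sketch of the kind of platform guard this implies in `setup.py` (illustrative only; the actual extras and package names differ):
```python
import sys

dev_extras = ["pytest", "black"]
if sys.platform != "win32":
    # some optional dependencies don't ship Windows wheels
    dev_extras.append("faiss-cpu")
```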
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8102/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8102/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8102",
"html_url": "https://github.com/huggingface/transformers/pull/8102",
"diff_url": "https://github.com/huggingface/transformers/pull/8102.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8102.patch",
"merged_at": 1603823989000
} |
https://api.github.com/repos/huggingface/transformers/issues/8101 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8101/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8101/comments | https://api.github.com/repos/huggingface/transformers/issues/8101/events | https://github.com/huggingface/transformers/issues/8101 | 730,647,198 | MDU6SXNzdWU3MzA2NDcxOTg= | 8,101 | should be BartConfig.prefix None? | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,603 | 1,610 | 1,610 | CONTRIBUTOR | null | For bart-large-xsum/bart-large-cnn, currently set to `' '`
+ does not help evaluation performance vs setting to `None` (checked xsum and CNN)
+ It could help fine-tuning performance by serving as a work-around for the `add_prefix_space` issue?
Putting this here in case others have thoughts.
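A quick way to inspect the current value (sketch; using the hub model id):
```python
from transformers import BartConfig

config = BartConfig.from_pretrained("facebook/bart-large-cnn")
print(repr(config.prefix))  # currently ' '
```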
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8101/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8101/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8100 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8100/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8100/comments | https://api.github.com/repos/huggingface/transformers/issues/8100/events | https://github.com/huggingface/transformers/issues/8100 | 730,625,549 | MDU6SXNzdWU3MzA2MjU1NDk= | 8,100 | Rename swish to silu to give appropriate credit | {
"login": "TFUsers",
"id": 25044281,
"node_id": "MDQ6VXNlcjI1MDQ0Mjgx",
"avatar_url": "https://avatars.githubusercontent.com/u/25044281?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TFUsers",
"html_url": "https://github.com/TFUsers",
"followers_url": "https://api.github.com/users/TFUsers/followers",
"following_url": "https://api.github.com/users/TFUsers/following{/other_user}",
"gists_url": "https://api.github.com/users/TFUsers/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TFUsers/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TFUsers/subscriptions",
"organizations_url": "https://api.github.com/users/TFUsers/orgs",
"repos_url": "https://api.github.com/users/TFUsers/repos",
"events_url": "https://api.github.com/users/TFUsers/events{/privacy}",
"received_events_url": "https://api.github.com/users/TFUsers/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It is worth mentioning that PyTorch's SiLU op is an optimized implementation: https://github.com/pytorch/pytorch/pull/42976",
"You are correct, we should update and deprecate. Do you want to take a stab at a PR?"
] | 1,603 | 1,604 | 1,604 | CONTRIBUTOR | null | The swish was originally coined the "SiLU" in https://arxiv.org/pdf/1606.08415.pdf and https://arxiv.org/abs/1702.03118, long before the swish paper. Renaming other people's exact same ideas is unacceptable, and huggingface's naming convention implicitly erases the research and work of people outside of Google.
This request is inspired by a [discussion](https://www.reddit.com/r/MachineLearning/comments/hkiyir/r_google_has_a_credit_assignment_problem_in/) and a recent [tensorflow issue](https://github.com/tensorflow/tensorflow/issues/41066), but this problem has been brought up every few months for the past few years. In light of recent efforts to make the ML community more equitable and _fair_, this is a no-brainer and long overdue.
**Will this change the current api? How?**
The API would replace the "swish" argument with the "silu" argument and deprecate the swish.
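As a rough sketch of what that rename-plus-deprecation could look like (illustrative only, not the actual `transformers` activation code):
```python
import warnings

import torch


def silu(x):
    # SiLU (Hendrycks & Gimpel, 2016; Elfwing et al., 2017): x * sigmoid(x)
    return x * torch.sigmoid(x)


def swish(x):
    warnings.warn("swish has been renamed to silu; use silu instead", FutureWarning)
    return silu(x)
```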
[PyTorch 1.7](https://pytorch.org/docs/1.7.0/generated/torch.nn.SiLU.html?highlight=silu) added the SiLU. TensorFlow has added the [SiLU](https://github.com/tensorflow/tensorflow/blob/27d26a8d86bceda282ad9ba3e3116a00759d4ebc/tensorflow/python/ops/nn_impl.py#L517), which should ship in the next release. JAX has already added the SiLU;
`jax.nn.swish` will eventually be deprecated, `jax.nn.silu` will be added, and both of the aforementioned papers will be cited in the documentation. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8100/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8100/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8099 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8099/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8099/comments | https://api.github.com/repos/huggingface/transformers/issues/8099/events | https://github.com/huggingface/transformers/issues/8099 | 730,569,513 | MDU6SXNzdWU3MzA1Njk1MTM= | 8,099 | Reformer implementation in Tensorflow | {
"login": "gcuder",
"id": 60609608,
"node_id": "MDQ6VXNlcjYwNjA5NjA4",
"avatar_url": "https://avatars.githubusercontent.com/u/60609608?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gcuder",
"html_url": "https://github.com/gcuder",
"followers_url": "https://api.github.com/users/gcuder/followers",
"following_url": "https://api.github.com/users/gcuder/following{/other_user}",
"gists_url": "https://api.github.com/users/gcuder/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gcuder/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gcuder/subscriptions",
"organizations_url": "https://api.github.com/users/gcuder/orgs",
"repos_url": "https://api.github.com/users/gcuder/repos",
"events_url": "https://api.github.com/users/gcuder/events{/privacy}",
"received_events_url": "https://api.github.com/users/gcuder/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This would be cool! I don't believe it's on our roadmap currently (cc @patrickvonplaten), but should be part of a general TF overhaul we'll be doing in the coming months.\r\n\r\nNo date for that yet, but I think you can expect it in the future.",
"Could be a \"Good First Issue\" :D Yeah, I think this is not really on our roadmap because `Reformer` is a pretty complicated model (It tweaks the backprop pass) and does not really have pretrained weights :-/ ",
"Sounds like a good first project to deep dive in TensorFlow and rip a few hair out in the process :) ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,603 | 1,610 | 1,610 | CONTRIBUTOR | null | # 🚀 Feature request
Since there is a PyTorch implementation of the Reformer, will there be a TensorFlow implementation too?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8099/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8099/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8098 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8098/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8098/comments | https://api.github.com/repos/huggingface/transformers/issues/8098/events | https://github.com/huggingface/transformers/issues/8098 | 730,551,164 | MDU6SXNzdWU3MzA1NTExNjQ= | 8,098 | RuntimeError: Trying to create tensor with negative dimension | {
"login": "davidliujiafeng",
"id": 20847058,
"node_id": "MDQ6VXNlcjIwODQ3MDU4",
"avatar_url": "https://avatars.githubusercontent.com/u/20847058?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davidliujiafeng",
"html_url": "https://github.com/davidliujiafeng",
"followers_url": "https://api.github.com/users/davidliujiafeng/followers",
"following_url": "https://api.github.com/users/davidliujiafeng/following{/other_user}",
"gists_url": "https://api.github.com/users/davidliujiafeng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/davidliujiafeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidliujiafeng/subscriptions",
"organizations_url": "https://api.github.com/users/davidliujiafeng/orgs",
"repos_url": "https://api.github.com/users/davidliujiafeng/repos",
"events_url": "https://api.github.com/users/davidliujiafeng/events{/privacy}",
"received_events_url": "https://api.github.com/users/davidliujiafeng/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I'm experiencing the same issue. Setting `vocab_size=tokenizer.vocab_size` does not help.\r\n\r\nI've noticed if I artificially inflate the vocab_size (i.e. `vocab_size=tokenizer.vocab_size+1`), the negative dimension in the error will also be greater by 1 (ie `-199743` instead of `-199744`.",
"Hey! From your description it sounds like you haven't changed the cutoff points for adaptive embeddings. (the different sizes of the clusters for the hierarchical softmax generation). This causes an issue as the last cluster of embeddings, the one for the least frequent words, has size `vocab_size - cutoffs[-1]` so if the last cutoff is bigger than the vocab size, that's negative.\r\n\r\nNow for only 256 vocab words, adaptive embeddings don't really matter anyway, so I'd recommend running\r\n\r\n```\r\nfrom transformers import TransfoXLConfig, TransfoXLModel\r\nconfiguration = TransfoXLConfig(vocab_size=256, cutoffs=[])\r\nmodel = TransfoXLModel(configuration)\r\n```",
"This worked for me, thanks a lot @TevenLeScao ! If you had a larger vocab size would you just recommend setting the last cutoff to be `0 < cutoff < vocab_size`?\r\n",
"Ah actually I re-read the code and docs and the two ends of the cutoffs are already provided; they're appended later, so what you want is actually `cutoffs=[]`, even if `cutoffs=[0, 256]` seems to work anyway (I've edited my previous answer).\r\n\r\nIn any case, to answer your question, yes, for a larger vocab size it is actually quite helpful to have `0 < cutoff < vocab_size`! Personally I start considering doing that for a vocabulary on the order of a few tens of thousands - so like 40000 for example, but your mileage may vary, I recommend experimenting yourself and checking whether it makes a difference :) it should mostly help with memory use.",
"Awesome, thanks for the helpful advice! I'd posted about the same issue on https://discuss.huggingface.co/t/transfoxllmheadmodel-trying-to-create-tensor-with-negative-dimension-199500/1768/2 but it remained unanswered, so I linked your comment in that thread as a solution.",
"@TevenLeScao Thanks very much, it works great for me, close the issue now."
] | 1,603 | 1,603 | 1,603 | NONE | null | ## Environment info
- `transformers` version: 3.4.0
- Platform: Linux-3.10.0-957.el7.x86_64-x86_64-with-debian-stretch-sid
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
@TevenLeScao
## Information
I am using TransfoXLModel. The problem arises when running the code below (it works fine if I do not pass `vocab_size=256`):
* the example scripts:
```python
from transformers import TransfoXLConfig, TransfoXLModel
configuration = TransfoXLConfig(vocab_size=256)
model = TransfoXLModel(configuration)
```
## Error I get:
```
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-323-7039580347ad> in <module>
      3 configuration = TransfoXLConfig(vocab_size=256)
      4 # Initializing a model from the configuration
----> 5 model = TransfoXLModel(configuration)

/opt/conda/lib/python3.6/site-packages/transformers/modeling_transfo_xl.py in __init__(self, config)
    736
    737         self.word_emb = AdaptiveEmbedding(
--> 738             config.vocab_size, config.d_embed, config.d_model, config.cutoffs, div_val=config.div_val
    739         )
    740

/opt/conda/lib/python3.6/site-packages/transformers/modeling_transfo_xl.py in __init__(self, n_token, d_embed, d_proj, cutoffs, div_val, sample_softmax)
    421             l_idx, r_idx = self.cutoff_ends[i], self.cutoff_ends[i + 1]
    422             d_emb_i = d_embed // (div_val ** i)
--> 423             self.emb_layers.append(nn.Embedding(r_idx - l_idx, d_emb_i))
    424             self.emb_projs.append(nn.Parameter(torch.FloatTensor(d_proj, d_emb_i)))
    425

/opt/conda/lib/python3.6/site-packages/torch/nn/modules/sparse.py in __init__(self, num_embeddings, embedding_dim, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse, _weight)
    107         self.scale_grad_by_freq = scale_grad_by_freq
    108         if _weight is None:
--> 109             self.weight = Parameter(torch.Tensor(num_embeddings, embedding_dim))
    110             self.reset_parameters()
    111         else:
RuntimeError: Trying to create tensor with negative dimension -199744: [-199744, 16]
```
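Per the workaround in the comments, disabling the adaptive-embedding cutoffs avoids the failure for small vocabularies:
```python
from transformers import TransfoXLConfig, TransfoXLModel

# with no cutoffs, no embedding cluster can end up with a negative size
configuration = TransfoXLConfig(vocab_size=256, cutoffs=[])
model = TransfoXLModel(configuration)
```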
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8098/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8098/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8097 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8097/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8097/comments | https://api.github.com/repos/huggingface/transformers/issues/8097/events | https://github.com/huggingface/transformers/pull/8097 | 730,531,935 | MDExOlB1bGxSZXF1ZXN0NTEwODIwNTI1 | 8,097 | [wip/s2s] Aggregate Rouge Deterministically | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,604 | 1,604 | CONTRIBUTOR | null | Take randomness/sampling out of `calculate_rouge_score`.
Not ready for merge as the default should be changed back to not using this.
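A minimal sketch of what deterministic aggregation could look like with `rouge_score`, averaging per-example F-measures instead of bootstrap resampling (illustrative, not the exact patch):
```python
from rouge_score import rouge_scorer


def deterministic_rouge(preds, refs, rouge_keys=("rouge1", "rouge2", "rougeL")):
    scorer = rouge_scorer.RougeScorer(list(rouge_keys), use_stemmer=True)
    totals = {k: 0.0 for k in rouge_keys}
    for pred, ref in zip(preds, refs):
        scores = scorer.score(ref, pred)  # reference first, prediction second
        for k in rouge_keys:
            totals[k] += scores[k].fmeasure
    return {k: round(100 * v / len(preds), 4) for k, v in totals.items()}
```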
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8097/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8097/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8097",
"html_url": "https://github.com/huggingface/transformers/pull/8097",
"diff_url": "https://github.com/huggingface/transformers/pull/8097.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8097.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/8096 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8096/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8096/comments | https://api.github.com/repos/huggingface/transformers/issues/8096/events | https://github.com/huggingface/transformers/pull/8096 | 730,507,586 | MDExOlB1bGxSZXF1ZXN0NTEwODAwMjMz | 8,096 | Create README.md | {
"login": "ganeshkharad2",
"id": 20132026,
"node_id": "MDQ6VXNlcjIwMTMyMDI2",
"avatar_url": "https://avatars.githubusercontent.com/u/20132026?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ganeshkharad2",
"html_url": "https://github.com/ganeshkharad2",
"followers_url": "https://api.github.com/users/ganeshkharad2/followers",
"following_url": "https://api.github.com/users/ganeshkharad2/following{/other_user}",
"gists_url": "https://api.github.com/users/ganeshkharad2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ganeshkharad2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ganeshkharad2/subscriptions",
"organizations_url": "https://api.github.com/users/ganeshkharad2/orgs",
"repos_url": "https://api.github.com/users/ganeshkharad2/repos",
"events_url": "https://api.github.com/users/ganeshkharad2/events{/privacy}",
"received_events_url": "https://api.github.com/users/ganeshkharad2/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,603 | 1,607 | 1,607 | CONTRIBUTOR | null | # What does this PR do?
Adds a new model card README.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8096/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8096/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8096",
"html_url": "https://github.com/huggingface/transformers/pull/8096",
"diff_url": "https://github.com/huggingface/transformers/pull/8096.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8096.patch",
"merged_at": 1607697913000
} |
https://api.github.com/repos/huggingface/transformers/issues/8095 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8095/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8095/comments | https://api.github.com/repos/huggingface/transformers/issues/8095/events | https://github.com/huggingface/transformers/pull/8095 | 730,458,138 | MDExOlB1bGxSZXF1ZXN0NTEwNzU5MzMz | 8,095 | Fix IterableDataset with __len__ in Trainer | {
"login": "cccntu",
"id": 31893406,
"node_id": "MDQ6VXNlcjMxODkzNDA2",
"avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cccntu",
"html_url": "https://github.com/cccntu",
"followers_url": "https://api.github.com/users/cccntu/followers",
"following_url": "https://api.github.com/users/cccntu/following{/other_user}",
"gists_url": "https://api.github.com/users/cccntu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cccntu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cccntu/subscriptions",
"organizations_url": "https://api.github.com/users/cccntu/orgs",
"repos_url": "https://api.github.com/users/cccntu/repos",
"events_url": "https://api.github.com/users/cccntu/events{/privacy}",
"received_events_url": "https://api.github.com/users/cccntu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | # What does this PR do?
Fix #8087
Bring back support for `IterableDataset` with `__len__` in Trainer. Changed in #7858
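For context, the affected case is an iterable dataset that still implements `__len__`, e.g. (illustrative):
```python
import torch


class SizedIterableDataset(torch.utils.data.IterableDataset):
    """An IterableDataset that also reports its length."""

    def __init__(self, examples):
        self.examples = examples

    def __iter__(self):
        return iter(self.examples)

    def __len__(self):
        return len(self.examples)
```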
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8095/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8095/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8095",
"html_url": "https://github.com/huggingface/transformers/pull/8095",
"diff_url": "https://github.com/huggingface/transformers/pull/8095.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8095.patch",
"merged_at": 1603806756000
} |
https://api.github.com/repos/huggingface/transformers/issues/8094 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8094/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8094/comments | https://api.github.com/repos/huggingface/transformers/issues/8094/events | https://github.com/huggingface/transformers/issues/8094 | 730,435,997 | MDU6SXNzdWU3MzA0MzU5OTc= | 8,094 | Documentation error in question-answering pipeline | {
"login": "nishchay47b",
"id": 57478479,
"node_id": "MDQ6VXNlcjU3NDc4NDc5",
"avatar_url": "https://avatars.githubusercontent.com/u/57478479?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nishchay47b",
"html_url": "https://github.com/nishchay47b",
"followers_url": "https://api.github.com/users/nishchay47b/followers",
"following_url": "https://api.github.com/users/nishchay47b/following{/other_user}",
"gists_url": "https://api.github.com/users/nishchay47b/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nishchay47b/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nishchay47b/subscriptions",
"organizations_url": "https://api.github.com/users/nishchay47b/orgs",
"repos_url": "https://api.github.com/users/nishchay47b/repos",
"events_url": "https://api.github.com/users/nishchay47b/events{/privacy}",
"received_events_url": "https://api.github.com/users/nishchay47b/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"You are correct! Do you want to open a PR fixing the docs?",
"Sure, I'll do that. Closing this issue for now, I'll refer to this in the PR.",
"There is one more thing, sorry I didn't realize this earlier, and please let me know if I am wrong, The [span_to_answer](https://huggingface.co/transformers/main_classes/pipelines.html#transformers.QuestionAnsweringPipeline.span_to_answer) method of this pipeline object doesn't add much value, because the object itself when called with `question` and `context` will return the start and end indexes of the answer in the context. Moreover, it will give the wrong results because `span_to_answer` is expecting token index and not string indexes.\r\nIn continuation to above code;\r\n```python\r\nprint(len(tokenizer.tokenize(text)))\r\n# output: 96\r\n\r\n print(qa_pipeline.span_to_answer(text=text,start=int(result[0][\"start\"]), end=int(result[0][\"end\"])))\r\n# this will print {'answer': '', 'start': 0, 'end': 0} because the start index of the answer is 256 and end index is 264 while the \r\n# tokenized length is 96, this line in the function will stop the loop\r\n# if token_idx > end:\r\n# break\r\n\r\n```\r\nThis part in `__call__` [method](https://huggingface.co/transformers/main_classes/pipelines.html#transformers.QuestionAnsweringPipeline.__call__) is already taking care of remapping to string indexes:\r\n```python\r\n # Convert the answer (tokens) back to the original text\r\n answers += [\r\n {\r\n \"score\": score.item(),\r\n \"start\": np.where(char_to_word == feature.token_to_orig_map[s])[0][0].item(),\r\n \"end\": np.where(char_to_word == feature.token_to_orig_map[e])[0][-1].item(),\r\n \"answer\": \" \".join(\r\n example.doc_tokens[feature.token_to_orig_map[s] : feature.token_to_orig_map[e] + 1]\r\n ),\r\n }\r\n```\r\nUnless we have some method to get the token index, this method I think will not work.\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,603 | 1,610 | 1,610 | NONE | null | Hi,
The [QuestionAnsweringPipeline](https://huggingface.co/transformers/main_classes/pipelines.html?highlight=pipelines#transformers.QuestionAnsweringPipeline.__call__) returns start and end positions into the context string, not into "the tokenized version of the input" as mentioned in the doc.
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering, pipeline
tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased-whole-word-masking-finetuned-squad")
qa_pipeline = pipeline("question-answering")
text = r"""
🤗 Transformers (formerly known as pytorch-transformers and pytorch-pretrained-bert) provides general-purpose
architectures (BERT, GPT-2, RoBERTa, XLM, DistilBert, XLNet…) for Natural Language Understanding (NLU) and Natural
Language Generation (NLG) with over 32+ pretrained models in 100+ languages and deep interoperability between
TensorFlow 2.0 and PyTorch.
"""
questions = [
"How many pretrained models?"
]
result = qa_pipeline(question=questions, context=text)
print(result)
#this is the correct answer
print(text[int(result["start"]):int(result["end"])])
#this is not correct
print(tokenizer.tokenize(text[int(result["start"]):int(result["end"])]))
```
Sorry if I misunderstood the doc | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8094/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8094/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8093 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8093/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8093/comments | https://api.github.com/repos/huggingface/transformers/issues/8093/events | https://github.com/huggingface/transformers/pull/8093 | 730,410,577 | MDExOlB1bGxSZXF1ZXN0NTEwNzE5Nzg1 | 8,093 | Fully remove codecov | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"If we try to go back to using it, perhaps there is a way to merge reports from 2 half-full tests - otherwise it creates an unnecessary slowdown on CI to run both together if we don't need to."
] | 1,603 | 1,603 | 1,603 | MEMBER | null | Fully removes codecov, as we don't have any full test suite on CircleCI, alongside https://github.com/huggingface/transformers/commit/829b9f8cc321aa28396e6203e0f21eed26b132f7
If we want to put it back up, we can merge the two slow tests (TF + PT) and run coverage on that, but we should first take care of the inconsistencies in coverage as explained in https://github.com/huggingface/transformers/issues/6317
cc @stas00 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8093/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8093/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8093",
"html_url": "https://github.com/huggingface/transformers/pull/8093",
"diff_url": "https://github.com/huggingface/transformers/pull/8093.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8093.patch",
"merged_at": 1603822453000
} |
https://api.github.com/repos/huggingface/transformers/issues/8092 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8092/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8092/comments | https://api.github.com/repos/huggingface/transformers/issues/8092/events | https://github.com/huggingface/transformers/pull/8092 | 730,393,684 | MDExOlB1bGxSZXF1ZXN0NTEwNzA1NzAz | 8,092 | Fix DeBERTa docs | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | MEMBER | null | Fix the DeBERTa docs | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8092/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8092/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8092",
"html_url": "https://github.com/huggingface/transformers/pull/8092",
"diff_url": "https://github.com/huggingface/transformers/pull/8092.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8092.patch",
"merged_at": 1603804062000
} |
https://api.github.com/repos/huggingface/transformers/issues/8091 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8091/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8091/comments | https://api.github.com/repos/huggingface/transformers/issues/8091/events | https://github.com/huggingface/transformers/pull/8091 | 730,388,706 | MDExOlB1bGxSZXF1ZXN0NTEwNzAxNTQ3 | 8,091 | Fix assertion error message for MLflowCallback | {
"login": "harupy",
"id": 17039389,
"node_id": "MDQ6VXNlcjE3MDM5Mzg5",
"avatar_url": "https://avatars.githubusercontent.com/u/17039389?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/harupy",
"html_url": "https://github.com/harupy",
"followers_url": "https://api.github.com/users/harupy/followers",
"following_url": "https://api.github.com/users/harupy/following{/other_user}",
"gists_url": "https://api.github.com/users/harupy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/harupy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/harupy/subscriptions",
"organizations_url": "https://api.github.com/users/harupy/orgs",
"repos_url": "https://api.github.com/users/harupy/repos",
"events_url": "https://api.github.com/users/harupy/events{/privacy}",
"received_events_url": "https://api.github.com/users/harupy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | # What does this PR do?
Fixes the assertion error message for `MLflowCallback`.

## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8091/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8091/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8091",
"html_url": "https://github.com/huggingface/transformers/pull/8091",
"diff_url": "https://github.com/huggingface/transformers/pull/8091.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8091.patch",
"merged_at": 1603809292000
} |
https://api.github.com/repos/huggingface/transformers/issues/8090 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8090/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8090/comments | https://api.github.com/repos/huggingface/transformers/issues/8090/events | https://github.com/huggingface/transformers/pull/8090 | 730,352,380 | MDExOlB1bGxSZXF1ZXN0NTEwNjcwNjY1 | 8,090 | Update README.md | {
"login": "dartrevan",
"id": 24587263,
"node_id": "MDQ6VXNlcjI0NTg3MjYz",
"avatar_url": "https://avatars.githubusercontent.com/u/24587263?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dartrevan",
"html_url": "https://github.com/dartrevan",
"followers_url": "https://api.github.com/users/dartrevan/followers",
"following_url": "https://api.github.com/users/dartrevan/following{/other_user}",
"gists_url": "https://api.github.com/users/dartrevan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dartrevan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dartrevan/subscriptions",
"organizations_url": "https://api.github.com/users/dartrevan/orgs",
"repos_url": "https://api.github.com/users/dartrevan/repos",
"events_url": "https://api.github.com/users/dartrevan/events{/privacy}",
"received_events_url": "https://api.github.com/users/dartrevan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8090/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8090/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8090",
"html_url": "https://github.com/huggingface/transformers/pull/8090",
"diff_url": "https://github.com/huggingface/transformers/pull/8090.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8090.patch",
"merged_at": 1603974693000
} |
https://api.github.com/repos/huggingface/transformers/issues/8089 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8089/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8089/comments | https://api.github.com/repos/huggingface/transformers/issues/8089/events | https://github.com/huggingface/transformers/pull/8089 | 730,351,415 | MDExOlB1bGxSZXF1ZXN0NTEwNjY5ODQz | 8,089 | Create README.md | {
"login": "dartrevan",
"id": 24587263,
"node_id": "MDQ6VXNlcjI0NTg3MjYz",
"avatar_url": "https://avatars.githubusercontent.com/u/24587263?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dartrevan",
"html_url": "https://github.com/dartrevan",
"followers_url": "https://api.github.com/users/dartrevan/followers",
"following_url": "https://api.github.com/users/dartrevan/following{/other_user}",
"gists_url": "https://api.github.com/users/dartrevan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dartrevan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dartrevan/subscriptions",
"organizations_url": "https://api.github.com/users/dartrevan/orgs",
"repos_url": "https://api.github.com/users/dartrevan/repos",
"events_url": "https://api.github.com/users/dartrevan/events{/privacy}",
"received_events_url": "https://api.github.com/users/dartrevan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8089/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8089/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8089",
"html_url": "https://github.com/huggingface/transformers/pull/8089",
"diff_url": "https://github.com/huggingface/transformers/pull/8089.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8089.patch",
"merged_at": 1603974223000
} |
https://api.github.com/repos/huggingface/transformers/issues/8088 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8088/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8088/comments | https://api.github.com/repos/huggingface/transformers/issues/8088/events | https://github.com/huggingface/transformers/pull/8088 | 730,350,413 | MDExOlB1bGxSZXF1ZXN0NTEwNjY4OTg2 | 8,088 | Create README.md | {
"login": "dartrevan",
"id": 24587263,
"node_id": "MDQ6VXNlcjI0NTg3MjYz",
"avatar_url": "https://avatars.githubusercontent.com/u/24587263?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dartrevan",
"html_url": "https://github.com/dartrevan",
"followers_url": "https://api.github.com/users/dartrevan/followers",
"following_url": "https://api.github.com/users/dartrevan/following{/other_user}",
"gists_url": "https://api.github.com/users/dartrevan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dartrevan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dartrevan/subscriptions",
"organizations_url": "https://api.github.com/users/dartrevan/orgs",
"repos_url": "https://api.github.com/users/dartrevan/repos",
"events_url": "https://api.github.com/users/dartrevan/events{/privacy}",
"received_events_url": "https://api.github.com/users/dartrevan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8088/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8088/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8088",
"html_url": "https://github.com/huggingface/transformers/pull/8088",
"diff_url": "https://github.com/huggingface/transformers/pull/8088.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8088.patch",
"merged_at": 1603974210000
} |
https://api.github.com/repos/huggingface/transformers/issues/8087 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8087/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8087/comments | https://api.github.com/repos/huggingface/transformers/issues/8087/events | https://github.com/huggingface/transformers/issues/8087 | 730,331,132 | MDU6SXNzdWU3MzAzMzExMzI= | 8,087 | #7858 breaks IterableDataset with __len__ in Trainer | {
"login": "cccntu",
"id": 31893406,
"node_id": "MDQ6VXNlcjMxODkzNDA2",
"avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cccntu",
"html_url": "https://github.com/cccntu",
"followers_url": "https://api.github.com/users/cccntu/followers",
"following_url": "https://api.github.com/users/cccntu/following{/other_user}",
"gists_url": "https://api.github.com/users/cccntu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cccntu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cccntu/subscriptions",
"organizations_url": "https://api.github.com/users/cccntu/orgs",
"repos_url": "https://api.github.com/users/cccntu/repos",
"events_url": "https://api.github.com/users/cccntu/events{/privacy}",
"received_events_url": "https://api.github.com/users/cccntu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I'm sorry, but you could explain to me why an `IterableDataset` with a `__len__` is not a regular `Dataset`?",
"In my case, I wrap a `Dataset` using a class that inherits `IterableDataset`, and defines a `__len__()`.\r\nThe purpose is to implement smart batching[1]. I use `IterableDataset` so I can control how to iterate the data.\r\n\r\nI don't know if it's possible/easier to `Dataset`+`Sampler`, if so please let me know.\r\nAlso note that (after the change) if I drop `__len__()` to suppress the bug, I would then need to specify `max_iter` (or something like that), which is inconvenient.\r\n[1] (https://wandb.ai/pommedeterresautee/speed_training/reports/Train-HuggingFace-Models-Twice-As-Fast--VmlldzoxMDgzOTI)\r\n\r\n",
"It does seem a bit hacky but I guess we can add that test. Do you want to suggest a PR with the change?"
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | https://github.com/huggingface/transformers/blob/08f534d2da47875a4b7eb1c125cfa7f0f3b79642/src/transformers/trainer.py#L381-L382
This used to be (before #7858)
```python
if isinstance(self.train_dataset, torch.utils.data.IterableDataset):
```
I am using an `IterableDataset` with `__len__` in `Trainer`. This change makes it return a sampler, which results in an error later: `ValueError: DataLoader with IterableDataset: expected unspecified sampler option, but got sampler=<torch.utils.data.sampler.RandomSampler object at 0x7fa32c57b340>`
Maybe change to this?
```python
if (isinstance(self.train_dataset, torch.utils.data.IterableDataset) or
        not isinstance(self.train_dataset, collections.abc.Sized)):
```
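For reference, here is a minimal sketch (an assumed shape, not the reporter's actual code) of a dataset that hits this path: an `IterableDataset` that also defines `__len__`, so it is `Sized` but still must not be paired with a sampler.
```python
import collections.abc

import torch
from torch.utils.data import IterableDataset


class SizedIterableDataset(IterableDataset):
    """Iterates examples in a custom order (e.g. smart batching) but still reports a length."""

    def __init__(self, examples):
        self.examples = examples

    def __iter__(self):
        # Custom iteration logic (length-sorted batches, etc.) would go here.
        return iter(self.examples)

    def __len__(self):
        # Defining __len__ makes the dataset Sized, so a check on Sized alone
        # would wrongly hand it a RandomSampler.
        return len(self.examples)


ds = SizedIterableDataset(list(range(10)))
print(isinstance(ds, torch.utils.data.IterableDataset))  # True
print(isinstance(ds, collections.abc.Sized))  # also True, hence the combined check above
```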
@j-rossi-nl @sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8087/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8087/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8086 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8086/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8086/comments | https://api.github.com/repos/huggingface/transformers/issues/8086/events | https://github.com/huggingface/transformers/issues/8086 | 730,307,848 | MDU6SXNzdWU3MzAzMDc4NDg= | 8,086 | Hello world example fail with transformers-3.4 | {
"login": "guotong1988",
"id": 4702353,
"node_id": "MDQ6VXNlcjQ3MDIzNTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4702353?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guotong1988",
"html_url": "https://github.com/guotong1988",
"followers_url": "https://api.github.com/users/guotong1988/followers",
"following_url": "https://api.github.com/users/guotong1988/following{/other_user}",
"gists_url": "https://api.github.com/users/guotong1988/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guotong1988/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guotong1988/subscriptions",
"organizations_url": "https://api.github.com/users/guotong1988/orgs",
"repos_url": "https://api.github.com/users/guotong1988/repos",
"events_url": "https://api.github.com/users/guotong1988/events{/privacy}",
"received_events_url": "https://api.github.com/users/guotong1988/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"We will probably need at least the full error trace",
"```\r\nCalling BertTokenizer.from_pretrained() with the path to a single file or url is deprecated\r\nSpecial tokens have been added in the vocabulary, make sure the associated word embedding are fine-tuned or trained.\r\nTraceback (most recent call last):\r\n File \"/Users/gt/Desktop/transformers-3.4.0/my_code/test.py\", line 4, in <module>\r\n model = TFAutoModel.from_pretrained(\"/Users/gt/Desktop/transformers-3.4.0/my_code/bert-base-uncased-pytorch_model.bin\")\r\n File \"/Users/gt/Desktop/transformers-3.4.0/src/transformers/modeling_tf_auto.py\", line 493, in from_pretrained\r\n pretrained_model_name_or_path, return_unused_kwargs=True, **kwargs\r\n File \"/Users/gt/Desktop/transformers-3.4.0/src/transformers/configuration_auto.py\", line 330, in from_pretrained\r\n config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)\r\n File \"/Users/gt/Desktop/transformers-3.4.0/src/transformers/configuration_utils.py\", line 374, in get_config_dict\r\n config_dict = cls._dict_from_json_file(resolved_config_file)\r\n File \"/Users/gt/Desktop/transformers-3.4.0/src/transformers/configuration_utils.py\", line 456, in _dict_from_json_file\r\n text = reader.read()\r\n File \"/Users/gt/Py36-tf1.4/lib/python3.6/codecs.py\", line 321, in decode\r\n (result, consumed) = self._buffer_decode(data, self.errors, final)\r\nUnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte\r\n```",
"This line: `model = TFAutoModel.from_pretrained(\"/Users/gt/Desktop/transformers-3.4.0/my_code/bert-base-uncased-pytorch_model.bin\")`\r\n\r\nshould point to a directory contining both the model file and the configuration. Also, you're loading a `pytorch_model.bin` in a `TFAutoModel`, whereas this is a TensorFlow automodel.\r\n\r\n- You should make sure that you're loading from a directory containing either `(pytorch_model.bin, config.json)` for PyTorch, or `(tf_model.h5, config.json)` for TensorFlow\r\n- You can load a PyTorch model in TensorFlow, but you should specify `from_pt=True`, and you can load a TensorFlow model in PyTorch but you should specify the `from_tf=True` option.\r\n\r\nYou can find more information about this in the [quick tour](https://huggingface.co/transformers/quicktour.html#under-the-hood-pretrained-models).",
"@LysandreJik Thank you but\r\n\r\n```\r\nfrom transformers import AutoTokenizer, TFAutoModel\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"/Users/gt/Desktop/transformers-3.4.0/my_code/\")\r\nmodel = TFAutoModel.from_pretrained(\"/Users/gt/Desktop/transformers-3.4.0/my_code/\")\r\n\r\ninputs = tokenizer(\"Hello world!\", return_tensors=\"tf\")\r\noutputs = model(**inputs)\r\nprint(outputs)\r\n```\r\n\r\n\r\n\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/Users/gt/Desktop/transformers-3.4.0/my_code/test.py\", line 3, in <module>\r\n tokenizer = AutoTokenizer.from_pretrained(\"/Users/gt/Desktop/transformers-3.4.0/my_code/\")\r\n File \"/Users/gt/Desktop/transformers-3.4.0/src/transformers/tokenization_auto.py\", line 333, in from_pretrained\r\n return tokenizer_class_py.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)\r\n File \"/Users/gt/Desktop/transformers-3.4.0/src/transformers/tokenization_utils_base.py\", line 1591, in from_pretrained\r\n list(cls.vocab_files_names.values()),\r\nOSError: Model name '/Users/gt/Desktop/transformers-3.4.0/my_code/' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). We assumed '/Users/gt/Desktop/transformers-3.4.0/my_code/' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.\r\n```",
"Hello, as said before you need your files to be correctly named. Your model should be `pytorch_model.bin` or `tf_model.h5`, your configuration `config.json`, and your tokenizer should also be pointing to a file that has an appropriate name. You seem to be loading a `bert-base-cased` model, which should be used with a `BertTokenizer` that uses `vocab.txt` files, as it is shown in the error.",
"Thank you."
] | 1,603 | 1,604 | 1,603 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 3.4
- Platform: Mac
- Python version: 3.6
- PyTorch version (GPU?):
- Tensorflow version (GPU?): 2.3
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
## Information
Model I am using (Bert, XLNet ...): Bert
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
## To reproduce
Steps to reproduce the behavior:
Download all the code of branch-3.4.0
```
from transformers import AutoTokenizer, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained("/Users/gt/Desktop/transformers-3.4.0/my_code/bert-base-uncased-tokenizer.json")
model = TFAutoModel.from_pretrained("/Users/gt/Desktop/transformers-3.4.0/my_code/bert-base-uncased-pytorch_model.bin")
inputs = tokenizer("Hello world!", return_tensors="tf")
outputs = model(**inputs)
print(outputs)
```
Model downloaded from https://mirrors.tuna.tsinghua.edu.cn/hugging-face-models/
Get this error at line 4 (the `TFAutoModel.from_pretrained` call):
```
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte
```
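For reference, a hedged sketch of the loading pattern described in the comments above (the directory path is a placeholder): point `from_pretrained` at a directory that holds the weights, `config.json` and `vocab.txt`, and pass `from_pt=True` when loading a PyTorch checkpoint into a TensorFlow class.
```python
from transformers import AutoTokenizer, TFAutoModel

# Hypothetical local directory containing pytorch_model.bin, config.json and vocab.txt.
model_dir = "/Users/gt/Desktop/bert-base-uncased"

tokenizer = AutoTokenizer.from_pretrained(model_dir)
# from_pt=True converts the PyTorch checkpoint for the TensorFlow model class.
model = TFAutoModel.from_pretrained(model_dir, from_pt=True)

inputs = tokenizer("Hello world!", return_tensors="tf")
outputs = model(**inputs)
```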
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8086/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8086/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8085 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8085/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8085/comments | https://api.github.com/repos/huggingface/transformers/issues/8085/events | https://github.com/huggingface/transformers/pull/8085 | 730,240,957 | MDExOlB1bGxSZXF1ZXN0NTEwNTc4MjUw | 8,085 | Merge pull request #1 from huggingface/master | {
"login": "Clement25",
"id": 35480362,
"node_id": "MDQ6VXNlcjM1NDgwMzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/35480362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Clement25",
"html_url": "https://github.com/Clement25",
"followers_url": "https://api.github.com/users/Clement25/followers",
"following_url": "https://api.github.com/users/Clement25/following{/other_user}",
"gists_url": "https://api.github.com/users/Clement25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Clement25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Clement25/subscriptions",
"organizations_url": "https://api.github.com/users/Clement25/orgs",
"repos_url": "https://api.github.com/users/Clement25/repos",
"events_url": "https://api.github.com/users/Clement25/events{/privacy}",
"received_events_url": "https://api.github.com/users/Clement25/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Closing as I believe this is an error :)"
] | 1,603 | 1,603 | 1,603 | NONE | null | Version track
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8085/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8085/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8085",
"html_url": "https://github.com/huggingface/transformers/pull/8085",
"diff_url": "https://github.com/huggingface/transformers/pull/8085.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8085.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/8084 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8084/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8084/comments | https://api.github.com/repos/huggingface/transformers/issues/8084/events | https://github.com/huggingface/transformers/pull/8084 | 730,158,073 | MDExOlB1bGxSZXF1ZXN0NTEwNTEwNTg1 | 8,084 | Fix tf export path type in notebooks/04-onnx-export.ipynb | {
"login": "mzmssg",
"id": 11887940,
"node_id": "MDQ6VXNlcjExODg3OTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/11887940?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mzmssg",
"html_url": "https://github.com/mzmssg",
"followers_url": "https://api.github.com/users/mzmssg/followers",
"following_url": "https://api.github.com/users/mzmssg/following{/other_user}",
"gists_url": "https://api.github.com/users/mzmssg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mzmssg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mzmssg/subscriptions",
"organizations_url": "https://api.github.com/users/mzmssg/orgs",
"repos_url": "https://api.github.com/users/mzmssg/repos",
"events_url": "https://api.github.com/users/mzmssg/events{/privacy}",
"received_events_url": "https://api.github.com/users/mzmssg/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,603 | 1,614 | 1,614 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8084/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8084/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8084",
"html_url": "https://github.com/huggingface/transformers/pull/8084",
"diff_url": "https://github.com/huggingface/transformers/pull/8084.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8084.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/8083 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8083/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8083/comments | https://api.github.com/repos/huggingface/transformers/issues/8083/events | https://github.com/huggingface/transformers/issues/8083 | 730,104,512 | MDU6SXNzdWU3MzAxMDQ1MTI= | 8,083 | FastFormers into transformers | {
"login": "ykim362",
"id": 22177353,
"node_id": "MDQ6VXNlcjIyMTc3MzUz",
"avatar_url": "https://avatars.githubusercontent.com/u/22177353?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ykim362",
"html_url": "https://github.com/ykim362",
"followers_url": "https://api.github.com/users/ykim362/followers",
"following_url": "https://api.github.com/users/ykim362/following{/other_user}",
"gists_url": "https://api.github.com/users/ykim362/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ykim362/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ykim362/subscriptions",
"organizations_url": "https://api.github.com/users/ykim362/orgs",
"repos_url": "https://api.github.com/users/ykim362/repos",
"events_url": "https://api.github.com/users/ykim362/events{/privacy}",
"received_events_url": "https://api.github.com/users/ykim362/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Hi! This is great, thanks for offering to contribute it! From what I understand, `FastFormers` contains several scripts that can be applied to `transformers` models out of the box, that is, training, distillation, pruning, using quantization alongside the onnx runtime and fp16 optimizations. \r\n\r\nIs that correct? If that is so, the easiest way would be to add the corresponding scripts to the `examples/` directory, probably under `examples/fastformers`. If there are modifications made to the model themselves, we can take a look together at how we can integrate those in the library.",
"Hi, thanks for your interest! From what I understand, I think your model falls in the category of dynamic acceleration. For these types of paper, I recommend you to integrate it to `examples/`, just like [PABEE](https://github.com/huggingface/transformers/tree/master/examples/bert-loses-patience) and [DeeBERT](https://github.com/huggingface/transformers/tree/master/examples/deebert). I've emailed you an invitation to our Slack channel if it works for you. cc @LysandreJik ",
" @LysandreJik yes, that is correct. Thanks @JetRunner, let's discuss more on the slack.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I am sorry, but I have been fully loaded with some other stuffs. I won't be able to make a progress. I'd like to close this to avoid any confusion."
] | 1,603 | 1,632 | 1,632 | NONE | null | # 🌟 New model addition
## Model description
We just open-sourced [FastFormers](https://arxiv.org/abs/2010.13382), our SustaiNLP 2020 systems (FastFormers: Highly Efficient Transformer Models for Natural Language Understanding, [paper](https://arxiv.org/abs/2010.13382)).
Currently, we are hosting this on our repository, but would like to merge it back to the transformers repository as an example.
our repo - https://github.com/microsoft/fastformers
For the purpose of the shared task, this is implemented purely with the SuperGLUE data set.
So, it's dependent on Alex Wang's (@W4ngatang) SuperGLUE data processing pipeline.
Also, many parts of the implementation are based on Alex's.
(https://github.com/W4ngatang/transformers/tree/superglue)
What would be the best way to merge this back?
## Open source status
* [x] the model implementation is available: https://github.com/microsoft/fastformers/blob/main/examples/fastformers/run_superglue.py
* [x] the model weights are available: demo systems are uploaded. https://github.com/microsoft/fastformers/releases/tag/v0.1-model
* [x] who are the authors: @ykim362
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8083/reactions",
"total_count": 25,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 12,
"rocket": 4,
"eyes": 3
} | https://api.github.com/repos/huggingface/transformers/issues/8083/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8082 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8082/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8082/comments | https://api.github.com/repos/huggingface/transformers/issues/8082/events | https://github.com/huggingface/transformers/pull/8082 | 730,067,312 | MDExOlB1bGxSZXF1ZXN0NTEwNDM3NDM3 | 8,082 | Fix doc examples | {
"login": "mymusise",
"id": 6883957,
"node_id": "MDQ6VXNlcjY4ODM5NTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/6883957?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mymusise",
"html_url": "https://github.com/mymusise",
"followers_url": "https://api.github.com/users/mymusise/followers",
"following_url": "https://api.github.com/users/mymusise/following{/other_user}",
"gists_url": "https://api.github.com/users/mymusise/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mymusise/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mymusise/subscriptions",
"organizations_url": "https://api.github.com/users/mymusise/orgs",
"repos_url": "https://api.github.com/users/mymusise/repos",
"events_url": "https://api.github.com/users/mymusise/events{/privacy}",
"received_events_url": "https://api.github.com/users/mymusise/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | Fix many `{model_class}.from_pretrained())` -> `{model_class}.from_pretrained()`. Hope it helps.
documentation: @sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8082/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8082/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8082",
"html_url": "https://github.com/huggingface/transformers/pull/8082",
"diff_url": "https://github.com/huggingface/transformers/pull/8082.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8082.patch",
"merged_at": 1603798166000
} |
https://api.github.com/repos/huggingface/transformers/issues/8081 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8081/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8081/comments | https://api.github.com/repos/huggingface/transformers/issues/8081/events | https://github.com/huggingface/transformers/pull/8081 | 730,061,894 | MDExOlB1bGxSZXF1ZXN0NTEwNDMzMTIw | 8,081 | Move style_doc to extra_quality_checks | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Ideally there should be a modified_py_and_rst_files to speed up this check (and only apply it to modified files), but this works in the meantime. @stas00 if you want to do that last bit of optimization, let me know, otherwise I'll do that later.",
"Go for it, @sgugger. The modified files var is there, so it should be easy to apply it anywhere.\r\n\r\nIf you get stuck I'm here to help."
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | Previously, a doc style error in a .rst file slipped through, because the check only ran over modified Python files:
```bash
python utils/style_doc.py $(modified_py_files) --max_len 119;
```
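For reference, a hedged sketch (the helper name is illustrative, not the repo's actual Makefile logic) of the follow-up optimization mentioned in the comments above: collect the modified `.py` and `.rst` files so the doc styler only runs on what changed.
```python
import subprocess


def modified_py_and_rst_files(base="master"):
    """Return paths of .py/.rst files changed relative to `base` (illustrative helper)."""
    diff = subprocess.run(
        ["git", "diff", "--name-only", "--diff-filter=d", base],
        capture_output=True,
        text=True,
        check=True,
    ).stdout.splitlines()
    return [path for path in diff if path.endswith((".py", ".rst"))]
```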
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8081/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8081/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8081",
"html_url": "https://github.com/huggingface/transformers/pull/8081",
"diff_url": "https://github.com/huggingface/transformers/pull/8081.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8081.patch",
"merged_at": 1603806127000
} |
https://api.github.com/repos/huggingface/transformers/issues/8080 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8080/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8080/comments | https://api.github.com/repos/huggingface/transformers/issues/8080/events | https://github.com/huggingface/transformers/pull/8080 | 730,060,252 | MDExOlB1bGxSZXF1ZXN0NTEwNDMxODQz | 8,080 | Pre style | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,651 | 1,603 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8080/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8080/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8080",
"html_url": "https://github.com/huggingface/transformers/pull/8080",
"diff_url": "https://github.com/huggingface/transformers/pull/8080.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8080.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/8079 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8079/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8079/comments | https://api.github.com/repos/huggingface/transformers/issues/8079/events | https://github.com/huggingface/transformers/issues/8079 | 730,059,064 | MDU6SXNzdWU3MzAwNTkwNjQ= | 8,079 | Best practice to use this great repo for industry application. | {
"login": "guotong1988",
"id": 4702353,
"node_id": "MDQ6VXNlcjQ3MDIzNTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4702353?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guotong1988",
"html_url": "https://github.com/guotong1988",
"followers_url": "https://api.github.com/users/guotong1988/followers",
"following_url": "https://api.github.com/users/guotong1988/following{/other_user}",
"gists_url": "https://api.github.com/users/guotong1988/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guotong1988/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guotong1988/subscriptions",
"organizations_url": "https://api.github.com/users/guotong1988/orgs",
"repos_url": "https://api.github.com/users/guotong1988/repos",
"events_url": "https://api.github.com/users/guotong1988/events{/privacy}",
"received_events_url": "https://api.github.com/users/guotong1988/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"download all the source code"
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | # ❓ Questions & Help
Should I download all the source code and put it into PyCharm, or pip-install the package and use the API?
## Details
I want to pretrain and fine-tune the models here on our own dataset. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8079/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8079/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8078 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8078/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8078/comments | https://api.github.com/repos/huggingface/transformers/issues/8078/events | https://github.com/huggingface/transformers/issues/8078 | 730,057,519 | MDU6SXNzdWU3MzAwNTc1MTk= | 8,078 | Hope more GPT Chinese pretrained model. | {
"login": "guotong1988",
"id": 4702353,
"node_id": "MDQ6VXNlcjQ3MDIzNTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4702353?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guotong1988",
"html_url": "https://github.com/guotong1988",
"followers_url": "https://api.github.com/users/guotong1988/followers",
"following_url": "https://api.github.com/users/guotong1988/following{/other_user}",
"gists_url": "https://api.github.com/users/guotong1988/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guotong1988/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guotong1988/subscriptions",
"organizations_url": "https://api.github.com/users/guotong1988/orgs",
"repos_url": "https://api.github.com/users/guotong1988/repos",
"events_url": "https://api.github.com/users/guotong1988/events{/privacy}",
"received_events_url": "https://api.github.com/users/guotong1988/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Have you checked out the filtered list on the model hub? https://huggingface.co/models?filter=zh ",
"Thank you!"
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | # 🚀 Feature request
Hoping for more Chinese GPT pretrained models.

Thank you very much. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8078/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8078/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8077 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8077/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8077/comments | https://api.github.com/repos/huggingface/transformers/issues/8077/events | https://github.com/huggingface/transformers/issues/8077 | 730,046,790 | MDU6SXNzdWU3MzAwNDY3OTA= | 8,077 | Longformer crashes for position embeddings indexing? | {
"login": "PxYu",
"id": 16612275,
"node_id": "MDQ6VXNlcjE2NjEyMjc1",
"avatar_url": "https://avatars.githubusercontent.com/u/16612275?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PxYu",
"html_url": "https://github.com/PxYu",
"followers_url": "https://api.github.com/users/PxYu/followers",
"following_url": "https://api.github.com/users/PxYu/following{/other_user}",
"gists_url": "https://api.github.com/users/PxYu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PxYu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PxYu/subscriptions",
"organizations_url": "https://api.github.com/users/PxYu/orgs",
"repos_url": "https://api.github.com/users/PxYu/repos",
"events_url": "https://api.github.com/users/PxYu/events{/privacy}",
"received_events_url": "https://api.github.com/users/PxYu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi, have you tried running this code without using CUDA (i.e., on CPU)? The errors are usually more intelligible that way.\r\n\r\nCould this be due to an OOM error and cuda not recovering from it?",
"> Hi, have you tried running this code without using CUDA (i.e., on CPU)? The errors are usually more intelligible that way.\r\n> \r\n> Could this be due to an OOM error and cuda not recovering from it?\r\n\r\nHi @LysandreJik , thanks for your response! I made some exploration following your suggestion: I move the job to CPU only and tried again. I think this time it has something to do with the position embeddings indexing. Error message:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"finetune-marco.py\", line 94, in <module>\r\n marco.run()\r\n File \"/mnt/nfs/work1/user/user/LF-for-IR/Marco.py\", line 177, in run\r\n self.train()\r\n File \"/mnt/nfs/work1/user/user/LF-for-IR/Marco.py\", line 258, in train\r\n outputs = self.model(\r\n File \"/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 550, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/transformers/modeling_longformer.py\", line 1445, in forward\r\n outputs = self.longformer(\r\n File \"/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 550, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/transformers/modeling_longformer.py\", line 1261, in forward\r\n embedding_output = self.embeddings(\r\n File \"/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 550, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/transformers/modeling_longformer.py\", line 170, in forward\r\n position_embeddings = self.position_embeddings(position_ids)\r\n File \"/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 550, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/torch/nn/modules/sparse.py\", line 112, in forward\r\n return F.embedding(\r\n File \"/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/torch/nn/functional.py\", line 1724, in embedding\r\n return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)\r\nIndexError: index out of range in self\r\n\r\n```\r\n\r\n So if I go to ```class LongformerEmbeddings(nn.Module)``` in ```modeling_longformer.py```and print the ```self.position_ids``` and ```position_ids``` inside the ```forward()``` function, I get:\r\n\r\n```\r\ntensor([[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11,\r\n 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23,\r\n 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35,\r\n 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47,\r\n 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59,\r\n 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71,\r\n 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83,\r\n 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95,\r\n 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107,\r\n 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119,\r\n 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131,\r\n 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143,\r\n 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155,\r\n 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167,\r\n 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179,\r\n 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191,\r\n 192, 193, 194, 195, 196, 197, 198, 199, 200, 
So if I go to ```class LongformerEmbeddings(nn.Module)``` in ```modeling_longformer.py``` and print the ```self.position_ids``` buffer and the computed ```position_ids``` inside the ```forward()``` function, I get (the full printouts are elided here for readability):\r\n\r\n```\r\ntensor([[   0,    1,    2,  ..., 4095, 4096, 4097]]) torch.Size([1, 4098])\r\ntensor([[   2,    3,    4,  ...,  273,  274,    1,    1,  ...,    1,    1],\r\n        [   2,    3,    4,  ..., 3998, 3999, 4000, 4001, 
4002, 4003, 4004, 4005, 4006, 4007, 4008, 4009,\r\n 4010, 4011, 4012, 4013, 4014, 4015, 4016, 4017, 4018, 4019, 4020, 4021,\r\n 4022, 4023, 4024, 4025, 4026, 4027, 4028, 4029, 4030, 4031, 4032, 4033,\r\n 4034, 4035, 4036, 4037, 4038, 4039, 4040, 4041, 4042, 4043, 4044, 4045,\r\n 4046, 4047, 4048, 4049, 4050, 4051, 4052, 4053, 4054, 4055, 4056, 4057,\r\n 4058, 4059, 4060, 4061, 4062, 4063, 4064, 4065, 4066, 4067, 4068, 4069,\r\n 4070, 4071, 4072, 4073, 4074, 4075, 4076, 4077, 4078, 4079, 4080, 4081,\r\n 4082, 4083, 4084, 4085, 4086, 4087, 4088, 4089, 4090, 4091, 4092, 4093,\r\n 4094, 4095, 4096, 4097, 4098, 4099, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]) torch.Size([2, 4608])\r\n```\r\n\r\nWell, the model pads the sequence from 4098 to 4608 (because it's a multiple of 512, though I did not tell it to do so), and then the ```position_ids``` it ended up with starts with 1 instead of 0 and then there are also 4098 and 4099 in it, which I believe should be out of index? Needs confirmation from developers! Thx!",
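A note on the 4098 → 4608 padding observed in the printout above: Longformer pads inputs up to a multiple of its attention window. A minimal sketch of the arithmetic, assuming the default `attention_window = 512` of this checkpoint:

```python
import math

attention_window = 512  # default for allenai/longformer-base-4096 (assumption)
seq_len = 4098
padded_len = int(math.ceil(seq_len / attention_window)) * attention_window
print(padded_len)  # 4608 -- matches torch.Size([2, 4608]) in the printout above
```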
"@LysandreJik I am pretty sure that ```create_position_ids_from_input_ids()``` in modeling_longformer.py can generate position_ids that are larger than 4097 and lead to out-of-index problem.",
"Hey @PxYu - yeah the reason is that even though `Longformer` has a `max_position_embeddings` of 4098 in its official config: https://huggingface.co/allenai/longformer-base-4096, the maximum input it can handle is only `4096` or otherwise the `create_position_ids` function will crash. \r\n\r\nThis problem is analogues for `Roberta` as discussed here: https://github.com/huggingface/transformers/pull/8044#issuecomment-716513140 .\r\n\r\n@LysandreJik - I think it's wrong that Roberta and Longformer (as it was built on Roberta) have `max_position_embeddings` of 514 and 4098 respectively. I think we should definitely change the default `max_position_embeddings` in `RobertaConfig` and `LongformerConfig` and even if it breaks backward compatibility, I would advocate for changing the parameter in the configs of the main models as well. Someone using a `input_ids` of length `max_position_embeddings = 514 or = 4098` would probably have led to errors anyways IMO. These issues will probably happen more often otherwise. What do you think? ",
"As seen offline, having the `max_position_embeddings` set to 514 is an unfortunate decision we made when implementing the RoBERTa model and inheriting from the BERT model, a behavior we've since changed.\r\n\r\nUnfortunately, changing the `max_position_embeddings` would be impossible now, as this would imply modifying the model code so that it handles the embeddings to be of size `max_position_embeddings + 2`, which would break all current existing models. We could reach for an if/else statement analyzing `transformers` version, but this would be very error prone, and would needlessly complicate the code.\r\n\r\nWe should document two things:\r\n- The `max_position_embeddings` should not be used to create sequences, the tokenizer's `max_len` should be used instead. Since models and tokenizers work in pairs, it is not incoherent to have to rely on the tokenizer attribute to create model sequences.\r\n- We should document that models should try to respect, as much as possible, the convention that `model.max_position_embeddings == tokenizer.max_len` to prevent such confusion from happening in newer models.",
"Agree! I think we could however also change the default value of `max_position_embeddings` of `LongformerConfig` and `RobertaConfig` to 4096 and 512 - this should not break anything as people load their saved config via `from_pretrained(...)`.\r\n\r\nThe big advantage of doing so would be to somewhat prevent future models that built on top of Roberta/Longformer from having these wrong numbers.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,603 | 1,610 | 1,610 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.4.0
- Platform: Linux-3.10.0-1127.19.1.el7.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.5.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: apex ddp
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
blenderbot: @mariamabarham
Bart: @sshleifer
Marian: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
--> @patrickvonplaten maybe?
## Information
Model I am using (Bert, XLNet ...): Longformer
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQuAD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Use Apex DDP with `LongformerForSequenceClassification`
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
my code snippet:
```python
import torch
from apex import amp  # NVIDIA apex for mixed-precision (O2) training
from torch.utils.data.distributed import DistributedSampler


def train(self):
    self.model.train()
    losses = []
    # under DDP, reshuffle the sampler each epoch
    if isinstance(self.train_loader.sampler, DistributedSampler):
        self.train_loader.sampler.set_epoch(self.epoch)
    for qids, dids, queries, documents, y in self.train_loader:
        # Tokenize each (query, document) pair, truncating and padding to
        # self.max_len. Note: `is_pretokenized` is deprecated in v3.4.0 in favor
        # of `is_split_into_words`, hence the FutureWarning in the log below.
        # As discussed in the comments above, max_len must not exceed 4096 for
        # this checkpoint, or the position ids go out of range.
        encoded = self._tokenizer.batch_encode_plus(
            batch_text_or_text_pairs=list(zip(queries, documents)),
            truncation="longest_first", add_special_tokens=True,
            max_length=self.max_len, padding="max_length",
            is_pretokenized=False, return_tensors="pt",
            return_attention_mask=True, return_token_type_ids=True)
        input_ids = encoded["input_ids"].cuda()
        attention_mask = encoded["attention_mask"].cuda()
        token_type_ids = encoded["token_type_ids"].cuda()
        y = torch.tensor(y).unsqueeze(1).cuda()
        # global attention on the query tokens
        global_attention_mask = self.get_global_attention(
            encoded["input_ids"], self.max_len, self._tokenizer.sep_token_id)[0].cuda()
        self.optimizer.zero_grad()
        outputs = self.model(
            input_ids=input_ids,
            attention_mask=attention_mask,
            global_attention_mask=global_attention_mask,
            labels=y
        )
        loss = outputs[0]
        # apex amp scales the loss for the fp16 backward pass
        with amp.scale_loss(loss, self.optimizer) as scaled_loss:
            scaled_loss.backward()
        self.optimizer.step()
```
The data are queries and documents that are either relevant (y=1) or irrelevant (y=0); each input is the concatenation of a query and a document. ```get_global_attention()``` is a helper that gives global attention to the query tokens (a hypothetical sketch follows below).
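Since the helper isn't shown in the report, here is a hypothetical sketch of what ```get_global_attention()``` might look like (assumptions: it marks every token up to and including the first `</s>` as global, and it returns a tuple, which would match the `[0]` indexing in the snippet above):

```python
import torch

def get_global_attention(input_ids, max_len, sep_token_id):
    # 1 = global attention, 0 = local sliding-window attention
    global_attention_mask = torch.zeros_like(input_ids)
    for i, row in enumerate(input_ids):
        # index of the first </s>, i.e. the end of the query segment
        first_sep = (row == sep_token_id).nonzero(as_tuple=True)[0][0]
        global_attention_mask[i, : first_sep + 1] = 1
    return (global_attention_mask,)
```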
I find that for some batches (not all batches!), the code gives the following errors, which are very confusing to me:
```
INFO:__main__:Namespace(apex_level='O2', batch_size=1, cased=1, debug=0, encoder_lr=1e-05, eval_step=1, finetune_embedding=0, local_rank=0, model_path='allenai/longformer-base-4096', model_type='longformer', num_epochs=20, num_ft_encoders=2, num_neg=1, projector_lr=1e-05, seed=611)
Some weights of the model checkpoint at allenai/longformer-base-4096 were not used when initializing LongformerForSequenceClassification: ['lm_head.bias', 'lm_head.dense.weight', 'lm_head.dense.bias', 'lm_head.layer_norm.weight', 'lm_head.layer_norm.bias', 'lm_head.decoder.weight']
- This IS expected if you are initializing LongformerForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).
- This IS NOT expected if you are initializing LongformerForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of LongformerForSequenceClassification were not initialized from the model checkpoint at allenai/longformer-base-4096 and are newly initialized: ['classifier.dense.weight', 'classifier.dense.bias', 'classifier.out_proj.weight', 'classifier.out_proj.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
INFO:__main__:Reading data from /....../sampled
INFO:root:Number of positive query-document pairs in [train] set: 67
INFO:root:Number of labelled query-document pairs in [dev] set: 2000
INFO:root:Number of labelled query-document pairs in [test] set: 2000
INFO:__main__:Data reading done ...
INFO:__main__:adding 10-th encoder to optimizer...
INFO:__main__:adding 11-th encoder to optimizer...
Selected optimization level O2: FP16 training with FP32 batchnorm and FP32 master weights.
Defaults for this optimization level are:
enabled : True
opt_level : O2
cast_model_type : torch.float16
patch_torch_functions : False
keep_batchnorm_fp32 : True
master_weights : True
loss_scale : dynamic
Processing user overrides (additional kwargs that are not None)...
After processing overrides, optimization options are:
enabled : True
opt_level : O2
cast_model_type : torch.float16
patch_torch_functions : False
keep_batchnorm_fp32 : True
master_weights : True
loss_scale : dynamic
INFO:__main__:process[0]: training epoch 0 ...
/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/transformers/tokenization_utils.py:547: FutureWarning: `is_pretokenized` is deprecated and will be removed in a future version, use `is_split_into_words` instead.
warnings.warn(
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 32768.0
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [10,0,0], thread: [32,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [10,0,0], thread: [33,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [10,0,0], thread: [34,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [10,0,0], thread: [35,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [10,0,0], thread: [36,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [10,0,0], thread: [37,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
[... the same `srcIndex < srcSelectDimSize` assertion is repeated for hundreds more block/thread indices (blocks [10,0,0], [2,0,0], [3,0,0], ...); truncated for brevity ...]
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [107,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [108,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [109,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [110,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [111,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [112,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [113,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [114,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [115,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [116,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [117,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [118,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [119,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [120,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [121,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [122,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [123,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [124,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [125,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [126,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [127,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [64,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [65,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [66,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [67,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [68,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [69,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [70,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [71,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [72,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [73,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [74,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [75,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [76,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [77,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [78,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [79,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [80,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [81,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [82,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [83,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [84,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [85,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [86,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [87,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [88,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [89,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [90,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [91,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [92,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [93,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [94,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [95,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [0,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [1,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [2,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [3,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [4,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [5,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [6,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [7,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [8,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [9,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [10,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [11,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [12,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [13,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [14,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [15,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [16,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [17,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [18,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [19,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [20,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [21,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [22,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [23,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [24,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [25,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [26,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [27,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [28,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [29,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [30,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [31,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [96,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [97,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [98,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [99,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [100,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [101,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [102,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [103,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [104,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [105,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [106,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [107,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [108,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [109,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [110,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [111,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [112,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [113,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [114,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [115,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [116,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [117,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [118,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [119,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [120,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [121,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [122,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [123,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [124,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [125,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [126,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [127,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [0,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [1,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [2,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [3,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [4,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [5,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [6,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [7,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [8,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [9,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [10,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [11,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [12,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [13,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [14,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [15,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [16,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [17,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [18,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [19,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [20,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [21,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [22,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [23,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [24,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [25,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [26,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [27,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [28,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [29,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [30,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [31,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [32,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [33,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [34,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [35,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [36,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [37,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [38,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [39,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [40,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [41,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [42,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [43,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [44,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [45,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [46,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [47,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [48,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [49,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [50,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [51,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [52,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [53,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [54,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [55,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [56,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [57,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [58,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [59,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [60,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [61,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [62,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [63,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [32,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [33,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [34,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [35,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [36,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [37,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [38,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [39,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [40,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [41,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [42,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [43,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [44,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [45,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [46,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [47,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [48,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [49,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [50,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [51,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [52,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [53,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [54,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [55,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [56,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [57,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [58,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [59,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [60,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [61,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [62,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [63,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [96,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [97,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [98,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [99,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [100,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [101,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [102,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [103,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [104,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [105,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [106,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [107,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [108,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [109,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [110,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [111,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [112,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [113,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [114,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [115,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [116,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [117,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [118,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [119,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [120,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [121,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [122,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [123,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [124,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [125,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [126,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [127,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [64,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [65,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [66,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [67,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [68,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [69,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [70,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [71,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [72,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [73,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [74,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [75,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [76,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [77,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [78,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [79,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [80,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [81,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [82,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [83,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [84,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [85,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [86,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [87,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [88,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [89,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [90,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [91,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [92,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [93,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [94,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [95,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [96,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [97,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [98,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [99,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [100,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [101,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [102,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [103,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [104,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [105,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [106,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [107,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [108,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [109,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [110,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [111,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [112,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [113,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [114,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [115,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [116,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [117,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [118,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [119,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [120,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [121,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [122,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [123,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [124,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [125,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [126,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [127,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [64,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [65,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [66,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [67,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [68,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [69,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [70,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [71,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [72,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [73,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [74,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [75,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [76,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [77,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [78,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [79,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [80,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [81,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [82,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [83,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [84,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [85,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [86,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [87,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [88,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [89,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [90,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [91,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [92,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [93,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [94,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [95,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [96,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [97,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [98,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [99,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [100,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [101,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [102,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [103,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [104,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [105,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [106,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [107,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [108,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [109,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [110,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [111,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [112,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [113,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [114,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [115,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [116,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [117,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [118,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [119,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [120,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [121,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [122,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [123,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [124,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [125,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [126,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [127,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [64,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [65,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [66,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [67,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [68,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [69,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [70,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [71,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [72,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [73,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [74,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [75,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [76,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [77,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [78,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [79,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [80,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [81,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [82,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [83,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [84,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [85,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [86,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [87,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [88,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [89,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [90,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [91,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [92,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [93,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [94,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [95,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [32,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [33,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [34,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [35,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [36,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [37,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [38,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [39,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [40,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [41,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [42,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [43,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [44,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [45,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [46,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [47,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [48,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [49,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [50,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [51,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [52,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [53,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [54,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [55,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [56,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [57,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [58,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [59,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [60,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [61,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [62,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [63,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [0,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [1,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [2,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [3,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [4,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [5,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [6,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [7,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [8,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [9,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [10,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [11,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [12,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [13,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [14,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [15,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [16,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [17,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [18,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [19,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [20,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [21,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [22,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [23,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [24,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [25,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [26,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [27,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [28,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [29,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [30,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [31,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [32,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [33,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [34,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [35,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [36,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [37,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [38,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [39,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [40,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [41,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [42,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [43,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [44,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [45,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [46,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [47,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [48,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [49,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [50,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [51,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [52,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [53,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [54,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [55,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [56,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [57,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [58,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [59,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [60,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [61,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [62,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [63,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [0,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [1,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [2,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [3,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [4,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [5,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [6,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [7,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [8,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [9,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [10,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [11,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [12,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [13,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [14,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [15,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [16,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [17,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [18,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [19,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [20,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [21,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [22,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [23,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [24,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [25,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [26,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [27,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [28,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [29,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [30,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [31,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [0,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [1,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [2,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [3,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [4,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [5,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [6,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [7,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [8,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [9,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [10,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [11,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [12,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [13,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [14,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [15,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [16,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [17,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [18,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [19,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [20,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [21,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [22,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [23,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [24,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [25,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [26,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [27,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [28,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [29,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [30,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [31,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [6,0,0], thread: [64,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [6,0,0], thread: [65,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [6,0,0], thread: [66,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [6,0,0], thread: [67,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [6,0,0], thread: [68,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [6,0,0], thread: [69,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
...... (saving space — the same `srcIndex < srcSelectDimSize` assertion repeats verbatim for hundreds of other block/thread indices)
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [5,0,0], thread: [40,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [5,0,0], thread: [41,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [5,0,0], thread: [42,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [5,0,0], thread: [43,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [5,0,0], thread: [44,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [5,0,0], thread: [45,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [5,0,0], thread: [46,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [5,0,0], thread: [47,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [5,0,0], thread: [48,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [5,0,0], thread: [49,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [5,0,0], thread: [50,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [5,0,0], thread: [51,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [5,0,0], thread: [52,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [5,0,0], thread: [53,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [5,0,0], thread: [54,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [5,0,0], thread: [55,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [5,0,0], thread: [56,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [5,0,0], thread: [57,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [5,0,0], thread: [58,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [5,0,0], thread: [59,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [5,0,0], thread: [60,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [5,0,0], thread: [61,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [5,0,0], thread: [62,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [5,0,0], thread: [63,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
Traceback (most recent call last):
File "finetune-marco.py", line 93, in <module>
marco.run()
File "/mnt/nfs/work1/allan/user/LF-for-IR/Marco.py", line 167, in run
self.train()
File "/mnt/nfs/work1/allan/user/LF-for-IR/Marco.py", line 223, in train
outputs = self.model(
File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/apex/amp/_initialize.py", line 196, in new_fwd
output = old_fwd(*applier(args, input_caster),
File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/apex/parallel/distributed.py", line 560, in forward
result = self.module(*inputs, **kwargs)
File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/transformers/modeling_longformer.py", line 1442, in forward
outputs = self.longformer(
File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/transformers/modeling_longformer.py", line 1262, in forward
encoder_outputs = self.encoder(
File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/transformers/modeling_longformer.py", line 903, in forward
layer_outputs = layer_module(
File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/transformers/modeling_longformer.py", line 849, in forward
self_attn_outputs = self.attention(
File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/transformers/modeling_longformer.py", line 793, in forward
self_outputs = self.self(
File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/transformers/modeling_longformer.py", line 246, in forward
is_global_attn = is_index_global_attn.flatten().any().item()
RuntimeError: CUDA error: device-side assert triggered
NCCL error in: /opt/conda/conda-bld/pytorch_1591914886554/work/torch/lib/c10d/../c10d/NCCLUtils.hpp:69, unhandled cuda error, NCCL version 2.4.8
Traceback (most recent call last):
File "/home/user/miniconda3/envs/marco/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/user/miniconda3/envs/marco/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/torch/distributed/launch.py", line 263, in <module>
main()
File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/torch/distributed/launch.py", line 258, in main
raise subprocess.CalledProcessError(returncode=process.returncode,
subprocess.CalledProcessError: Command '['/home/user/miniconda3/envs/marco/bin/python', '-u', 'finetune-marco.py', '--local_rank=0', '--model_type', 'longformer', '--model_path', 'allenai/longformer-base-4096', '--batch_size', '1', '--finetune_embedding', '0', '--cased', '1', '--num_neg', '1', '--eval_step', '1', '--num_epochs', '20', '--apex_level', 'O2', '--encoder_lr', '1e-5', '--projector_lr', '1e-5', '--num_ft_encoders', '2', '--seed', '611']' died with <Signals.SIGABRT: 6>.
```
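For anyone hitting the same assert: `srcIndex < srcSelectDimSize` in `indexSelectLargeIndex` indicates an out-of-range index during an embedding lookup. A minimal sanity check (a sketch — `model` and `batch` are assumed to be the objects from the training loop; the names are illustrative, not from the original script):

```python
# Re-check the ids feeding the embedding lookup before they reach the GPU.
input_ids = batch["input_ids"]

# Token ids must stay below the vocabulary size, and the sequence must fit
# within the position embeddings; either violation triggers this assert.
assert int(input_ids.max()) < model.config.vocab_size, "token id out of range"
assert input_ids.shape[-1] <= model.config.max_position_embeddings, "sequence too long"
```

Re-running the failing batch on CPU (or with `CUDA_LAUNCH_BLOCKING=1`) usually turns the device-side assert into a readable `IndexError` pointing at the exact tensor.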
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
I think this error is unexpected. I have tried other pretrained models, such as base BERT models, and they ran just fine. Can someone help interpret the error message here? Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8077/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8077/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8076 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8076/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8076/comments | https://api.github.com/repos/huggingface/transformers/issues/8076/events | https://github.com/huggingface/transformers/pull/8076 | 730,038,668 | MDExOlB1bGxSZXF1ZXN0NTEwNDE0Nzg1 | 8,076 | [setup] update/add setup targets | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Guys, since this is not code but a bunch of definitions please make direct suggestions that can be merged. Thank you. \r\n\r\nI originally just wanted to add `docs` and `flax` to `dev` and had no idea about all the other needs.",
"I can't make a suggestion that moves stuff around in the file. I can push a commit to your branch if you want, but that's the best I can do.",
"I understand. I just don't know to work in this fashion. If you give me a spec I can work to implement it. Otherwise let's just merge this and subsequent PRs can do further improvements. ",
"Sure, I'll add my comments in a separate PR. Thanks for doing this one!\r\n"
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | updating pip targets
* [x] adds `tokenizer`
* [x] adds `docs` to `dev`, since we need to have the tools to run `make docs`
* [x] adds `flax` to `dev`, since we need to have the libs to run flax tests - except when on windows - it skips it then
* [x] brings `all` up-to-date
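For illustration, a minimal sketch of how such extras can be wired in `setup.py` (the package lists here are assumptions, not the actual pins):

```python
# setup.py sketch — illustrative only; the real dependency pins live in setup.py.
extras = {}
extras["docs"] = ["sphinx", "recommonmark"]   # assumed doc-building tools
extras["flax"] = ["jax", "jaxlib"]            # skipped on Windows in practice
extras["dev"] = extras["docs"] + extras["flax"] + ["pytest", "black"]
extras["all"] = sorted({dep for deps in extras.values() for dep in deps})
```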
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8076/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8076/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8076",
"html_url": "https://github.com/huggingface/transformers/pull/8076",
"diff_url": "https://github.com/huggingface/transformers/pull/8076.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8076.patch",
"merged_at": 1603821297000
} |
https://api.github.com/repos/huggingface/transformers/issues/8075 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8075/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8075/comments | https://api.github.com/repos/huggingface/transformers/issues/8075/events | https://github.com/huggingface/transformers/pull/8075 | 730,033,917 | MDExOlB1bGxSZXF1ZXN0NTEwNDExMDAw | 8,075 | Create README.md | {
"login": "gurkan08",
"id": 33202187,
"node_id": "MDQ6VXNlcjMzMjAyMTg3",
"avatar_url": "https://avatars.githubusercontent.com/u/33202187?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gurkan08",
"html_url": "https://github.com/gurkan08",
"followers_url": "https://api.github.com/users/gurkan08/followers",
"following_url": "https://api.github.com/users/gurkan08/following{/other_user}",
"gists_url": "https://api.github.com/users/gurkan08/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gurkan08/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gurkan08/subscriptions",
"organizations_url": "https://api.github.com/users/gurkan08/orgs",
"repos_url": "https://api.github.com/users/gurkan08/repos",
"events_url": "https://api.github.com/users/gurkan08/events{/privacy}",
"received_events_url": "https://api.github.com/users/gurkan08/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"README.md file added for gurkan08/bert-turkish-text-classification model"
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8075/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8075/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8075",
"html_url": "https://github.com/huggingface/transformers/pull/8075",
"diff_url": "https://github.com/huggingface/transformers/pull/8075.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8075.patch",
"merged_at": 1603974154000
} |
https://api.github.com/repos/huggingface/transformers/issues/8074 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8074/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8074/comments | https://api.github.com/repos/huggingface/transformers/issues/8074/events | https://github.com/huggingface/transformers/pull/8074 | 729,999,858 | MDExOlB1bGxSZXF1ZXN0NTEwMzgzNzA4 | 8,074 | Doc styling fixes | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | COLLABORATOR | null | # What does this PR do?
This PR fixes a few of the docstrings left in a bad state by #8067, plus a small bug in the styling script (it was matching lines of the form `.. note::` or `.. warning::` as if they were examples inside docstrings).
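A minimal sketch of the distinction involved (the actual rules live in the repo's doc-styling utility; these regexes are illustrative assumptions):

```python
import re

# Directive lines such as ".. note::" or ".. warning::" must not be
# mistaken for the introduction of a code example.
DIRECTIVE_RE = re.compile(r"^\s*\.\.\s+\w+::")
EXAMPLE_RE = re.compile(r"^\s*Examples?\s*::\s*$")

def is_example_intro(line: str) -> bool:
    return bool(EXAMPLE_RE.match(line)) and not DIRECTIVE_RE.match(line)
```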
With this, everything should now be fine. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8074/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8074/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8074",
"html_url": "https://github.com/huggingface/transformers/pull/8074",
"diff_url": "https://github.com/huggingface/transformers/pull/8074.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8074.patch",
"merged_at": 1603799690000
} |
https://api.github.com/repos/huggingface/transformers/issues/8073 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8073/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8073/comments | https://api.github.com/repos/huggingface/transformers/issues/8073/events | https://github.com/huggingface/transformers/pull/8073 | 729,956,665 | MDExOlB1bGxSZXF1ZXN0NTEwMzQ3NjYz | 8,073 | [breaking|pipelines|tokenizers] Adding slow-fast tokenizers equivalence tests pipelines - Removing sentencepiece as a required dependency | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"On hold for now until we update the input/output of the alignement methods in `tokenizers` to handle pairs of input sentences following internal discussion with @n1t0 and @Narsil.",
"Remaining error on the CI (NER pipeline not working for slow tokenizers) should be solved by #8364 \r\n\r\nEdit: ok solved now that #8364 is merged",
"examples/seq2seq/finetune.py is failing after this PR:\r\n\r\n```\r\nCUDA_VISIBLE_DEVICES=0 pytest -sv examples/seq2seq/test_seq2seq_examples.py::TestTheRest::test_finetune_0_patrickvonplaten_t5_tiny_random\r\n```\r\n\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"finetune.py\", line 442, in <module>\r\n main(args)\r\n File \"finetune.py\", line 409, in main\r\n trainer: pl.Trainer = generic_train(\r\n File \"/mnt/nvme1/code/huggingface/transformers-master/examples/lightning_base.py\", line 398, in generic_train\r\n trainer.fit(model)\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py\", line 444, in fit\r\n results = self.accelerator_backend.train()\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pytorch_lightning/accelerators/gpu_accelerator.py\", line 63, in train\r\n results = self.train_or_test()\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py\", line 74, in train_or_test\r\n results = self.trainer.train()\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py\", line 493, in train\r\n self.train_loop.run_training_epoch()\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py\", line 554, in run_training_epoch\r\n for batch_idx, (batch, is_last_batch) in train_dataloader:\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pytorch_lightning/profiler/profilers.py\", line 80, in profile_iterable\r\n value = next(iterator)\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/data_connector.py\", line 46, in _with_is_last\r\n last = next(it)\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/utils/data/dataloader.py\", line 519, in __next__\r\n data = self._next_data()\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/utils/data/dataloader.py\", line 1169, in _next_data\r\n return self._process_data(data)\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/utils/data/dataloader.py\", line 1195, in _process_data\r\n data.reraise()\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/_utils.py\", line 428, in reraise\r\n raise self.exc_type(msg)\r\nAttributeError: Caught AttributeError in DataLoader worker process 0.\r\nOriginal Traceback (most recent call last):\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py\", line 202, in _worker_loop\r\n data = fetcher.fetch(index)\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py\", line 47, in fetch\r\n return self.collate_fn(data)\r\n File \"/mnt/nvme1/code/huggingface/transformers-master/examples/seq2seq/utils.py\", line 251, in collate_fn\r\n batch_encoding: Dict[str, torch.Tensor] = self.tokenizer.prepare_seq2seq_batch(\r\n File \"/mnt/nvme1/code/huggingface/transformers-master/src/transformers/models/bart/tokenization_bart_fast.py\", line 127, in prepare_seq2seq_batch\r\n model_inputs: BatchEncoding = self(\r\n File \"/mnt/nvme1/code/huggingface/transformers-master/src/transformers/tokenization_utils_base.py\", line 2319, in __call__\r\n return self.batch_encode_plus(\r\n File 
\"/mnt/nvme1/code/huggingface/transformers-master/src/transformers/tokenization_utils_base.py\", line 2504, in batch_encode_plus\r\n return self._batch_encode_plus(\r\n File \"/mnt/nvme1/code/huggingface/transformers-master/src/transformers/models/gpt2/tokenization_gpt2_fast.py\", line 167, in _batch_encode_plus\r\n return super()._batch_encode_plus(*args, **kwargs)\r\n File \"/mnt/nvme1/code/huggingface/transformers-master/src/transformers/tokenization_utils_fast.py\", line 433, in _batch_encode_plus\r\n return BatchEncoding(sanitized_tokens, sanitized_encodings, tensor_type=return_tensors)\r\n File \"/mnt/nvme1/code/huggingface/transformers-master/src/transformers/tokenization_utils_base.py\", line 242, in __init__\r\n n_sequences = encoding[0].n_sequences\r\nAttributeError: 'tokenizers.Encoding' object has no attribute 'n_sequences'\r\n\r\nException ignored in: <function tqdm.__del__ at 0x7f60b55d7b80>\r\nTraceback (most recent call last):\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/tqdm/std.py\", line 1128, in __del__\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/tqdm/std.py\", line 1341, in close\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/tqdm/std.py\", line 1520, in display\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/tqdm/std.py\", line 1131, in __repr__\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/tqdm/std.py\", line 1481, in format_dict\r\nTypeError: cannot unpack non-iterable NoneType object\r\n```",
"Hi @stas00, it seems you don't have the latest tokenizer version!",
"Bummer! Thank you for identifying that, @LysandreJik - I confirm updating `tokenizers` fixes it.\r\n\r\nSince this happened more than once now, would it be possible to add and maintain run-time checks as we have done here:\r\n\r\nhttps://github.com/huggingface/transformers/blob/dd52804f5fce0a568ffbb3dc7fd088d2de0a0e56/examples/lightning_base.py#L38-L47\r\n\r\nHow a developer is to know that they need to update a dependency of a project otherwise? `git pull` doesn't trigger `pip install -e \".[dev]\"`\r\n\r\nExcept the code above should emit an error on failed check. It should be an error in `lightning_base.py` too, but I couldn't convince @sshleifer to make it so. Warnings are plentiful and this approach of using a warning just doesn't serve its purpose, IMHO.\r\n",
"I like the function (though we can expand its name and replace `ver` by `version` inside to make it more readable ;-) ) and I think we could have a dynamic check at init. I don't see any problem with throwing an error if the minimum version of tokenizers isn't installed, but maybe @LysandreJik or @patrickvonplaten have a different opinion.",
"Oh, any name works, it was just code first, then I thought that it'll eventually end up being a reusable function, so the half-baked function emerged.\r\n\r\nI'm glad you agree that it should assert if the minimal requirements aren't met.\r\n\r\nTechnically we should then add `packaging` and `pkg_resources` to the project dependencies, but since they are prerequisites for `setuptools` - every user should already have them.\r\n\r\nAnd we will need 2 versions of errors: \r\n* `Try: pip install -r examples/requirements.txt` for examples, \r\n* `Try: pip install transformers -U\"` for the core.",
"Agree! I actually also already ran into the same issue @stas00 -> I'm definitely in favor of such a function! Should we run the function at `import transformers` already ? ",
"So did I, and I thought everything was broken until @sgugger showed me the way. Your function would definitely help in that regard @stas00 :)",
"> Should we run the function at `import transformers` already ?\r\n\r\nTwo ways I can see:\r\n\r\n1. right where the corresponding import is done - so it's easier to see the specific requirement - but it could be scattered - but it could be more difficult to maintain. Ideally, python would have `import module 1.2.3`, like some other languages have, but it doesn't at the moment.\r\n2. all in one place, at the very top of `__init__.py`, all requirements next to each other so it's easy to maintain. This check relies on packaging tools - i.e. derives it from the `site-packages` dir, so it shouldn't even load the module. i.e. we don't try to look for `xxx.__version__` here, since not all packages have it.\r\n\r\nI'd say (2) is the easier way.\r\n\r\nLast night I was dreaming of a trigger feature, where if git sees `setup.py` modified it'd alert someone to update `__init__.py` requirements - but it was a dream.\r\n",
"I think it can be in any file imported by the `__init__` (as long as the function is executed), so we could also have this in `file_utils.py`. Though the `__init__` is fine by me too if you think it's better.",
"Sure, then let's add a dedicated module then? It'd be the most simple/intuitive then\r\n\r\n```\r\n$ cat version_requirements.py\r\nrequire(\"tokenizers\", \"1.2.3\")\r\n...\r\n```\r\nand `__init__.py`:\r\n```\r\nimport .version_requirements\r\n```",
"hear, hear! if we have such a file then we can even use it to feed `setup.py`! so we have a single place where we edit all the minimal version requirements and don't need to touch `setup.py` and potentially forget to sync requirements.\r\n\r\nIn which case I'd use a dict and feed it to `require_version` (or whatever we end up calling it).\r\n\r\nClearly setup has a lot of optional things, so perhaps then we load this file at the end of __init__ and only check versions for the things that got loaded?\r\n\r\nor we just test only the package names that we know we need to check, but use that dict for setup.py's needs.\r\n\r\nLet me know if these ideas are an overkill.",
"Here is a quick prototype to what I'm thinking:\r\n```\r\n$ cat src/transformers/version_requirements.py\r\nmin_vers = dict(\r\n tokenizers: \"==0.9.4\",\r\n tqdm: \">=4.27\",\r\n jaxlib: \"==0.1.55\",\r\n)\r\nrun_time_keys = \"tokenizers tqdm\".split()\r\nfor k in run_time_keys:\r\n require_min_ver(k, min_vers[k])\r\n\r\n$ cat setup.py\r\nfrom version_requirements import min_vers\r\n# of course we won't hardcode each entry - this is a just to demonstrate\r\nextras[\"flax\"] = [f\"jaxlib{min_vers{'jax_lib']}\", ...\r\n\r\n```\r\n\r\nso you can see the dictionary has all the versions, but we actively check only the versions that are non-optional.\r\n",
"One downside of this is that it would move dependencies out of the setup.py (which is where people would expect to see them). Do you think there is a way to structure this so the one place we look at minimum version is the setup? It would be less surprising I think. ",
"I agree.\r\n\r\nWe could have all the version requirements defined in `setup.py` and when it's run it'd update `src/transformers/version_requirements.py` instead. Then we would actually want 2 files under transformers - one that `setup.py` will maintain - it will be just a dict dump - so that it could overwrite the file completely and another for the selective run-time checks that would refer to the first file generated by setup, since we will only check a handful of these many dependencies at run time.\r\n\r\n```\r\n$ cat setup.py\r\nmin_vers = dict(\r\n tokenizers: \"==0.9.4\",\r\n tqdm: \">=4.27\",\r\n jaxlib: \"==0.1.55\",\r\n)\r\n\r\n# add code to dump min_vers dict into `src/transformers/version_requirements.py`\r\n\r\n# of course we won't hardcode each entry - this is a just to demonstrate\r\nextras[\"flax\"] = [f\"jaxlib{min_vers{'jax_lib']}\", ...\r\n\r\n\r\n$ cat src/transformers/version_requirements.py\r\n# AUTOGENERATED - MODIFY setup.py INSTEAD! #\r\nmin_vers = dict(\r\n tokenizers: \"==0.9.4\",\r\n tqdm: \">=4.27\",\r\n jaxlib: \"==0.1.55\",\r\n)\r\n\r\n$ cat src/transformers/version_run_time_check.py\r\nfrom .version_requirements import min_vers\r\n\r\n# define which module versions we always want to check (only a few)\r\nrun_time_keys = \"tokenizers tqdm\".split()\r\nfor k in run_time_keys:\r\n require_min_ver(k, min_vers[k])\r\n\r\n$ cat src/transformers/__init__.py\r\nimport .version_run_time_check\r\n```\r\n\r\nThis is of course all just a visual prototype.\r\n",
"If you want to tackle this, please go ahead with something along these lines. We can refine more on an actual PR.",
"OK, I will make a partial sub-set of modules and when you like how it looks expand it to all modules.",
"Is there any way to workaround the version check?\r\nI want to use some features from `tokenizers 0.10`, but Transformers raise `VersionConflict`.\r\n\r\nSurely, this can cause some very non-obvious bugs, but at least I'll be able to work with my code before the new version of Transformers is released.",
"I think this is a very reasonable need, @Guitaricet. But it's probably best to discuss it in a dedicated issue. Could you please file a [feature request](https://github.com/huggingface/transformers/issues/new/choose) and let's see what others would think?\r\n\r\nI'd say an env var to override the checks should do the trick. Should be easy to add if the others agree with having it."
] | 1,603 | 1,611 | 1,605 | MEMBER | null | # What does this PR do?
**Breaking**: Auto-tokenizers and pipelines:
- switch to `use_fast=True` by default (Fast tokenizers by default)
=> The main expected breaking change is **the handling of overflowing tokens** which is different between slow and fast tokenizers.
- removing sentencepiece from the required dependencies (in some special cases this may require you to install `sentencepiece` in addition to the normal install).
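For example, the new default (a minimal sketch):

```python
from transformers import AutoTokenizer

# A fast (Rust-backed) tokenizer is now returned by default:
tok_fast = AutoTokenizer.from_pretrained("bert-base-uncased")

# The previous behavior remains available explicitly:
tok_slow = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=False)
```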
Pipelines:
- Add slow/fast tokenizers equivalence tests in pipelines
- upgrade QA/NER processing pipeline to handle fast tokenizers
- remove `test_pipelines_dialog.py` which was a duplicated test file
Tokenizers:
- Update and add a new alignment method in `BatchEncoding`
Dependencies:
- upgrade to tokenizers==0.9.4 to allow QA processing with fast tokenizers
- remove sentencepiece from the required dependencies
Misc:
- Fix bug in RobertaFast and test for XLM-Prophetnet and RAG
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8073/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8073/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8073",
"html_url": "https://github.com/huggingface/transformers/pull/8073",
"diff_url": "https://github.com/huggingface/transformers/pull/8073.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8073.patch",
"merged_at": 1605477059000
} |
https://api.github.com/repos/huggingface/transformers/issues/8072 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8072/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8072/comments | https://api.github.com/repos/huggingface/transformers/issues/8072/events | https://github.com/huggingface/transformers/issues/8072 | 729,955,048 | MDU6SXNzdWU3Mjk5NTUwNDg= | 8,072 | `BartForConditionalGeneration.from_pretrained` suddenly fails | {
"login": "danieldeutsch",
"id": 6633709,
"node_id": "MDQ6VXNlcjY2MzM3MDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6633709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/danieldeutsch",
"html_url": "https://github.com/danieldeutsch",
"followers_url": "https://api.github.com/users/danieldeutsch/followers",
"following_url": "https://api.github.com/users/danieldeutsch/following{/other_user}",
"gists_url": "https://api.github.com/users/danieldeutsch/gists{/gist_id}",
"starred_url": "https://api.github.com/users/danieldeutsch/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/danieldeutsch/subscriptions",
"organizations_url": "https://api.github.com/users/danieldeutsch/orgs",
"repos_url": "https://api.github.com/users/danieldeutsch/repos",
"events_url": "https://api.github.com/users/danieldeutsch/events{/privacy}",
"received_events_url": "https://api.github.com/users/danieldeutsch/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"I am pretty sure my guess at what happened was correct.\r\n\r\nOn one of my machines, I have the bart-large/config.json from October 12 with etag \"40bd49bcec9d93d8b0bfbd020088e2e1b6e6bb03e8e80aa5144638f90ca6bd61\" and it is 1.26kb. It contains an entry `\"output_past\": false`. Today, there is a file with a new etag \"8b65d3b9a47e96c1909d807f7e7f41dd1ed95092b139965be7b914aa4fb5fd08\" and it is 1.52kb. It does not contain any `output_past` entry.\r\n\r\nWhat could I have done to prevent this from happening? Is there a way to specific a specific version of the models?",
"I can confirm your diagnostic is correct.\r\n\r\nYou can check the diff between the two versions with:\r\n```\r\ncurl -i https://s3.amazonaws.com/models.huggingface.co/bert/facebook/bart-large/config.json?versionId=PFecmBwmg83YUwpv_kkc3kBzoCGebvu7\r\ncurl -i https://s3.amazonaws.com/models.huggingface.co/bert/facebook/bart-large/config.json?versionId=JhIFsOvvLtrLn0vJjGNN6ZhJGUlbXBEP\r\n```\r\n\r\nI'll let @sshleifer and @patrickvonplaten chime in about the actual file change, but to answer your last question:\r\n\r\n> What could I have done to prevent this from happening? Is there a way to specific a specific version of the models?\r\n\r\nWe will roll out a way to specify specific versions of models in the near future.",
"I mistakenly changed the `config`, my fault.\r\nCan you pass `output_past` to `__init__` or do you need me to add back the `output_past` key?",
"I was able to just remove the flag and it works with the updated config. To be honest, I don't know what the flag did -- I modified someone else's model which used it.\r\n\r\nThanks for looking into this. This issue can be closed"
] | 1,603 | 1,603 | 1,603 | NONE | null | I have been using the same `BartForConditionalGeneration` model and `transformers==3.0.2` for weeks, but today the same code threw a new error that has never happened before. It says I am passing an unexpected `output_past` parameter to `from_pretrained`. I am loading the `facebook/bart-large` model.
The line throwing the error is `BartForConditionalGeneration.from_pretrained("facebook/bart-large", output_past=True)`
```
/usr/local/lib/python3.6/dist-packages/qaeval/generation/model.py in __init__(self, vocab, model_name, max_decoding_steps, beam_size)
59 beam_size: int = 4) -> None:
60 super().__init__(vocab)
---> 61 self.bart = BartForConditionalGeneration.from_pretrained(model_name, output_past=True)
62 self.tokenizer = PretrainedTransformerTokenizer(model_name)
63
/usr/local/lib/python3.6/dist-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
670
671 # Instantiate model.
--> 672 model = cls(config, *model_args, **model_kwargs)
673
674 if state_dict is None and not from_tf:
TypeError: __init__() got an unexpected keyword argument 'output_past'
```
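For reference, a minimal sketch of the workaround confirmed in the comments above — simply dropping the removed flag:

```python
from transformers import BartForConditionalGeneration

# Loading without the removed `output_past` argument works with the updated config:
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")
```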
I am running this code in a Jupyter Notebook. This morning the code ran (I have all of the output saved). When I make a copy of the notebook and rerun it now without making any changes, it fails with the above error. I see that the `output_past` parameter was removed [here](https://github.com/huggingface/transformers/pull/3632/files/904b387af42744f9141a6dc4be698a5815ce5bbd), but that does not explain why it was working up until just a few hours ago.
Comparing the saved output between the two notebooks, I can see that one of the files transformers first downloads when loading the model used to be 1.26kb in size, but is now 1.52kb. I assume this is the config file for `facebook/bart-large`.
Did anything change in that file within the past few hours? I don't know where to look for a copy of that file to see what happened. I am quite perplexed by this issue. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8072/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8072/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8071 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8071/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8071/comments | https://api.github.com/repos/huggingface/transformers/issues/8071/events | https://github.com/huggingface/transformers/pull/8071 | 729,894,957 | MDExOlB1bGxSZXF1ZXN0NTEwMjk4NjI3 | 8,071 | [All Seq2Seq model + CLM models that can be used with EncoderDecoder] Add cross-attention weights to outputs | {
"login": "ysgit",
"id": 898918,
"node_id": "MDQ6VXNlcjg5ODkxOA==",
"avatar_url": "https://avatars.githubusercontent.com/u/898918?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ysgit",
"html_url": "https://github.com/ysgit",
"followers_url": "https://api.github.com/users/ysgit/followers",
"following_url": "https://api.github.com/users/ysgit/following{/other_user}",
"gists_url": "https://api.github.com/users/ysgit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ysgit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ysgit/subscriptions",
"organizations_url": "https://api.github.com/users/ysgit/orgs",
"repos_url": "https://api.github.com/users/ysgit/repos",
"events_url": "https://api.github.com/users/ysgit/events{/privacy}",
"received_events_url": "https://api.github.com/users/ysgit/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @ysgit - I like the idea! However, I think we should actually create a new output tuple for the cross-attention (called `cross_attention` when `return_dict`=True). For this we will have to create new ModelOutputs for Bert, RoBERTa, etc...\r\n",
"@patrickvonplaten that makes sense, I guess I suspected that might be the reaction, I'll see if that's something I can manage although I'm a little hampered by not being able to get the test suite to successfully run locally",
"@patrickvonplaten I have done as you suggested and separated cross attentions into a new variable in the output. please take a look and let me know what you think. many thanks!",
"In a future PR or a \"Good First Issue\", we could add this functionality to the TFSeq2Seq models as well, but I'm a bit hesitant given the current problems with the `output_attentions` flag in TF.\r\nWhat do you think @sgugger @LysandreJik @jplu ?",
"This should be easier to integrate in TF in the next release :)",
"Thanks everyone!"
] | 1,603 | 1,604 | 1,604 | CONTRIBUTOR | null | # What does this PR do?
This PR causes models that support cross-attention to output the cross-attention tensors as well as the self-attention tensors when `output_attentions` is set.
(from @patrickvonplaten)
This PR adds cross-attention outputs to all Seq2Seq models and to CLM models compatible with the `EncoderDecoderModel` framework.
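A minimal usage sketch (model name and inputs are illustrative; the new field is exposed on the output when `return_dict=True`):

```python
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

inputs = tokenizer("Hello world", return_tensors="pt")
outputs = model(
    **inputs,
    decoder_input_ids=inputs.input_ids,
    output_attentions=True,
    return_dict=True,
)

# Self-attention and cross-attention weights now come back as separate tuples:
print(len(outputs.decoder_attentions), len(outputs.cross_attentions))
```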
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR. @patrickvonplaten @Bharat123rox
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
--> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8071/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8071/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8071",
"html_url": "https://github.com/huggingface/transformers/pull/8071",
"diff_url": "https://github.com/huggingface/transformers/pull/8071.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8071.patch",
"merged_at": 1604687689000
} |
https://api.github.com/repos/huggingface/transformers/issues/8070 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8070/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8070/comments | https://api.github.com/repos/huggingface/transformers/issues/8070/events | https://github.com/huggingface/transformers/issues/8070 | 729,887,717 | MDU6SXNzdWU3Mjk4ODc3MTc= | 8,070 | Pretraining for encoder of TF T5 model | {
"login": "dharakotecha",
"id": 36985419,
"node_id": "MDQ6VXNlcjM2OTg1NDE5",
"avatar_url": "https://avatars.githubusercontent.com/u/36985419?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dharakotecha",
"html_url": "https://github.com/dharakotecha",
"followers_url": "https://api.github.com/users/dharakotecha/followers",
"following_url": "https://api.github.com/users/dharakotecha/following{/other_user}",
"gists_url": "https://api.github.com/users/dharakotecha/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dharakotecha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dharakotecha/subscriptions",
"organizations_url": "https://api.github.com/users/dharakotecha/orgs",
"repos_url": "https://api.github.com/users/dharakotecha/repos",
"events_url": "https://api.github.com/users/dharakotecha/events{/privacy}",
"received_events_url": "https://api.github.com/users/dharakotecha/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"@dharakotecha, you would need to create a MaskedLM head similar to distilbert. You would also need to either use one of the masekd token <extra_id_0> or add a new token for mask. \r\n\r\nSome experimentation on the t5 encoder shows that it does produce pretty reasonable output, but does not do MLM at the encoder side. The cloze/fill in the blank happens in the decoder side.\r\n\r\nThis is just quick and dirty. You would need to create a module similar to a MLM for distilbert with the loss, feeding in the output from the encoder into the projector:\r\n\r\n```\r\nt5 = AutoModel.from_pretrained('t5-small').cuda()\r\ntokenizer = AutoTokenizer.from_pretrained('t5-small')\r\nencoder = t5.encoder\r\nembeddings = encoder.embed_tokens\r\nconfig = t5.config\r\n#tied the embedding to the projector. \r\nvocab_projector = nn.Linear(config.d_model, config.vocab_size).cuda()\r\nvocab_projector.weight = embeddings.weightvocab_projector = nn.Linear(config.d_model, config.vocab_size).cuda()\r\nvocab_projector.bias.data = torch.nn.functional.pad(vocab_projector.bias.data, (0, vocab_projector.weight.shape[0] - vocab_projector.bias.shape[0],), \"constant\", 0)\r\n\r\ndef pred(predictions):\r\n for pred in predictions:\r\n print (\"**\")\r\n sorted_preds, sorted_idx = pred.sort(dim=-1, descending=True)\r\n ret = []\r\n for k in range(2):\r\n predicted_index = [sorted_idx[i, k].item() for i in range(0,len(predictions[0]))]\r\n predicted_token = ' '.join([tokenizer.convert_ids_to_tokens([predicted_index[x]])[0] for x in range(1,len(predictions[0]))]).replace('Ġ', ' ').replace(' ', ' ').replace('##', '')\r\n ret.append(predicted_token)\r\n return ret\r\n\r\n\r\ninput_txt = [\"</s>Lincoln was an American president and lawyer\"]\r\ninputs = tokenizer(input_txt, return_tensors='pt', add_special_tokens=True, padding=True)\r\npredictions = vocab_projector(encoder(inputs.input_ids.cuda())[0])\r\npred(predictions)\r\n\r\n\r\n```\r\nOutputs:\r\n['\\u2581Lincoln \\u2581was \\u2581psiho American \\u2581President & \\u2581lawyer </s>', '\\u2581senzati tais Mitglied \\u2581American \\u2581president \\u2581and \\u2581Lawyer \\u2581summarize']**\r\n\r\nBut, using the extra mask, will not infer the missing token.\r\n\r\n```\r\ninput_txt = [\"</s>Lincoln <extra_id_0> American president and lawyer\"]\r\ninputs = tokenizer(input_txt, return_tensors='pt', add_special_tokens=True, padding=True)\r\npredictions = vocab_projector(encoder(inputs.input_ids.cuda())[0])\r\npred(predictions)\r\n\r\n```\r\nWil output:\r\n['\\u2581Lincoln <extra_id_0> \\u2581American \\u2581president & \\u2581lawyer </s>', '\\u2581Abraham \\u2581botez American \\u2581President \\u2581and \\u2581Lawyer gasesc']**\r\n\r\nHope this helps,\r\n\r\n",
"I just realized that you asking about tensor flow too... I'm assuming you could do the similar thing but creating a MLM head in tesnor flow just for the encoder.",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,603 | 1,614 | 1,614 | NONE | null | Hi, for a sequence to sequence task in tensorflow, I see that I can use the TFT5 model. However, before sequence to sequence training, I need to perform masked language model pretraining on the encoder and initialise the weights of the encoder and decoder with the same weights that I obtain by masked language model pretraining. Is there a way to do this in the tensorflow version of T5? A rough sketch of the kind of setup I mean is below.
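(A rough, purely illustrative sketch of what I mean — it assumes `TFT5Model` exposes the encoder stack as `.encoder`, and the `mlm_projector` layer below is a made-up stand-in for a proper MLM head, not an existing transformers class:)

```python
import tensorflow as tf
from transformers import T5Tokenizer, TFT5Model

t5 = TFT5Model.from_pretrained("t5-small")
tokenizer = T5Tokenizer.from_pretrained("t5-small")
encoder = t5.encoder  # encoder stack only
config = t5.config

# Made-up MLM projector: maps encoder hidden states back onto the vocabulary.
mlm_projector = tf.keras.layers.Dense(config.vocab_size)

inputs = tokenizer("Lincoln was an American president", return_tensors="tf")
hidden_states = encoder(inputs["input_ids"])[0]  # (batch, seq_len, d_model)
logits = mlm_projector(hidden_states)            # (batch, seq_len, vocab_size)

# Masked positions would then be trained with a sparse cross-entropy loss,
# and the resulting encoder weights reused to initialise the decoder.
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
```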
I could do that with the EncoderDecoder, but it is only supported in PyTorch and not tensorflow. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8070/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8070/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8069 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8069/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8069/comments | https://api.github.com/repos/huggingface/transformers/issues/8069/events | https://github.com/huggingface/transformers/pull/8069 | 729,881,879 | MDExOlB1bGxSZXF1ZXN0NTEwMjg3OTg5 | 8,069 | DEP: pinned sentencepiece to 0.1.91 in setup.py to fix build issues with newer versions | {
"login": "jmwoloso",
"id": 7530947,
"node_id": "MDQ6VXNlcjc1MzA5NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7530947?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmwoloso",
"html_url": "https://github.com/jmwoloso",
"followers_url": "https://api.github.com/users/jmwoloso/followers",
"following_url": "https://api.github.com/users/jmwoloso/following{/other_user}",
"gists_url": "https://api.github.com/users/jmwoloso/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmwoloso/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmwoloso/subscriptions",
"organizations_url": "https://api.github.com/users/jmwoloso/orgs",
"repos_url": "https://api.github.com/users/jmwoloso/repos",
"events_url": "https://api.github.com/users/jmwoloso/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmwoloso/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I can check the other 3 boxes if needed. They didn't seem to apply to this particular PR so I left them unchecked.\r\n"
] | 1,603 | 1,604 | 1,603 | CONTRIBUTOR | null | # What does this PR do?
Pins `sentencepiece` to `0.1.91` to resolve build issues with newer versions
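For reference, the pin amounts to something like the following in `setup.py` (illustrative excerpt — the surrounding entries are elided, and the comment is mine):

```python
install_requires = [
    # ... other dependencies unchanged ...
    "sentencepiece==0.1.91",  # newer releases fail to build on some platforms, see issue #8020
]
```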
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
https://github.com/huggingface/transformers/issues/8020
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
@LysandreJik
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8069/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8069/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8069",
"html_url": "https://github.com/huggingface/transformers/pull/8069",
"diff_url": "https://github.com/huggingface/transformers/pull/8069.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8069.patch",
"merged_at": 1603822172000
} |
https://api.github.com/repos/huggingface/transformers/issues/8068 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8068/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8068/comments | https://api.github.com/repos/huggingface/transformers/issues/8068/events | https://github.com/huggingface/transformers/issues/8068 | 729,871,812 | MDU6SXNzdWU3Mjk4NzE4MTI= | 8,068 | seq2seq/finetune.py: remove useless check | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,603 | 1,609 | 1,609 | CONTRIBUTOR | null | This check can now be removed, since `prepare_seq2seq_batch` is defined on the base tokenizer already:
```python
self.dataset_class = (
    Seq2SeqDataset if hasattr(self.tokenizer, "prepare_seq2seq_batch") else LegacySeq2SeqDataset
)
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8068/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8068/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8067 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8067/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8067/comments | https://api.github.com/repos/huggingface/transformers/issues/8067/events | https://github.com/huggingface/transformers/pull/8067 | 729,846,425 | MDExOlB1bGxSZXF1ZXN0NTEwMjU5MDYw | 8,067 | Doc styling | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"A few loose-ends to tie, but that will be for tomorrow!"
] | 1,603 | 1,603 | 1,603 | COLLABORATOR | null | # What does this PR do?
This PR introduces a doc styling script and applies it to the repo. The styling script runs similarly to black, with an option that fixes and overwrites the files (put inside `make style`) and an option that only checks whether a restyle is needed, failing with an error if that's the case (put inside `make quality`).
The script is applied to all rst files inside `docs/source` and all py files inside `src/transformers`. It will look for paragraphs and always reorganize them to make the most of the `max_len` passed (set at 119 for the repo, like for the code). It will remove all duplicate or trailing whitespace, make all blank lines empty, ignore blocks of code/math and properly take care of the indentation.
A few extra things are performed:
- extending the underline of titles in rst files to `max_len` and always adding a blank line after those titles.
- unifying the format of the triple docstrings in the files
- always adding a new line before the beginning of a list (because sphinx sometimes complains otherwise)
To make the script ignore a string inside triple quotes (like warnings or long regex expressions), put a `# docstyle-ignore` somewhere before (it has to be between the previous triple quotes and the ones of the string you want to ignore).
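For example (a made-up snippet, purely to show where the marker goes):

```python
# docstyle-ignore
CHECKPOINT_PATTERN = """
^checkpoint-\d+$   # the styler leaves this triple-quoted string exactly as written
"""
```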
In general, if the script reformats a docstring atrociously, it is because the docstring was badly formatted. Adding a blank line to clearly mark paragraphs can make the script happier. Properly indenting lists of arguments (see examples in any of the files of the lib) is also important to get good outputs.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8067/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8067/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8067",
"html_url": "https://github.com/huggingface/transformers/pull/8067",
"diff_url": "https://github.com/huggingface/transformers/pull/8067.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8067.patch",
"merged_at": 1603751163000
} |
https://api.github.com/repos/huggingface/transformers/issues/8066 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8066/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8066/comments | https://api.github.com/repos/huggingface/transformers/issues/8066/events | https://github.com/huggingface/transformers/issues/8066 | 729,822,186 | MDU6SXNzdWU3Mjk4MjIxODY= | 8,066 | Missing Import | {
"login": "onaclov2000",
"id": 473412,
"node_id": "MDQ6VXNlcjQ3MzQxMg==",
"avatar_url": "https://avatars.githubusercontent.com/u/473412?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/onaclov2000",
"html_url": "https://github.com/onaclov2000",
"followers_url": "https://api.github.com/users/onaclov2000/followers",
"following_url": "https://api.github.com/users/onaclov2000/following{/other_user}",
"gists_url": "https://api.github.com/users/onaclov2000/gists{/gist_id}",
"starred_url": "https://api.github.com/users/onaclov2000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/onaclov2000/subscriptions",
"organizations_url": "https://api.github.com/users/onaclov2000/orgs",
"repos_url": "https://api.github.com/users/onaclov2000/repos",
"events_url": "https://api.github.com/users/onaclov2000/events{/privacy}",
"received_events_url": "https://api.github.com/users/onaclov2000/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Remember that the examples you pull from master need a [source install](https://huggingface.co/transformers/installation.html#installing-from-source). If you want the version that runs with the last release, you need to use that tag, here are the [examples for the last release](https://github.com/huggingface/transformers/releases/tag/v3.4.0) (v3.4.0).",
"You make a compelling argument :)\r\nI did a checkout of the v3.4.0 release, and besides dying from a CUDA out of memory, it appears to be working :)"
] | 1,603 | 1,603 | 1,603 | NONE | null | When trying to run this file:
https://github.com/huggingface/transformers/blob/3a10764574f252591eeaa5bbb10b778f623a4814/examples/language-modeling/run_language_modeling.py#L40
The following error occurs:
````
Traceback (most recent call last):
File "run_language_modeling.py", line 32, in <module>
from transformers import (
ImportError: cannot import name 'DataCollatorForWholeWordMask'
````
I attempted to run the following:
````
import transformers
for i in dir(transformers):
if "data" in i.lower():
print (i)
````
And I got the following:
````
CsvPipelineDataFormat
DataCollator
DataCollatorForLanguageModeling
DataCollatorForNextSentencePrediction
DataCollatorForPermutationLanguageModeling
DataCollatorForSOP
DataCollatorWithPadding
DataProcessor
GlueDataTrainingArguments
GlueDataset
JsonPipelineDataFormat
LineByLineTextDataset
LineByLineWithSOPTextDataset
PipedPipelineDataFormat
PipelineDataFormat
SquadDataTrainingArguments
SquadDataset
TextDataset
TextDatasetForNextSentencePrediction
data
default_data_collator
is_datasets_available
````
It appears that the **DataCollatorForWholeWordMask** is not a part of transformers for some reason.
I commented it out and the next import also appears to have an issue (I'll list all of the imports that complain at the bottom of this post and edit it as I find more).
At this point I commented the below imports out and it is running (well it's downloading one of the models I believe). I'll update if it fails/succeeds.
Missing Imports from transformers:
* DataCollatorForWholeWordMask
* LineByLineWithRefDataset
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8066/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8066/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8065 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8065/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8065/comments | https://api.github.com/repos/huggingface/transformers/issues/8065/events | https://github.com/huggingface/transformers/issues/8065 | 729,818,427 | MDU6SXNzdWU3Mjk4MTg0Mjc= | 8,065 | load 'microsoft/unilm-base-cased' failed | {
"login": "AI678",
"id": 63541083,
"node_id": "MDQ6VXNlcjYzNTQxMDgz",
"avatar_url": "https://avatars.githubusercontent.com/u/63541083?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AI678",
"html_url": "https://github.com/AI678",
"followers_url": "https://api.github.com/users/AI678/followers",
"following_url": "https://api.github.com/users/AI678/following{/other_user}",
"gists_url": "https://api.github.com/users/AI678/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AI678/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AI678/subscriptions",
"organizations_url": "https://api.github.com/users/AI678/orgs",
"repos_url": "https://api.github.com/users/AI678/repos",
"events_url": "https://api.github.com/users/AI678/events{/privacy}",
"received_events_url": "https://api.github.com/users/AI678/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hello\r\nSame problem for us.\r\nDo you plan to investigate / fix it ?\r\nCheers\r\nPhilippe",
"The UniLM model has not been released in the library yet.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,603 | 1,614 | 1,610 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiasts can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
<!-- Description of your issue -->
I use the following code in https://huggingface.co/microsoft/unilm-base-cased to load the model.
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("microsoft/unilm-base-cased")
model = AutoModel.from_pretrained("microsoft/unilm-base-cased")
```
And I got a traceback like this:
```
tokenizer = AutoTokenizer.from_pretrained("microsoft/unilm-base-cased")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\transformers\tokenization_auto.py", line 298, in from_pretrained
    config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs)
  File "C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\transformers\configuration_auto.py", line 341, in from_pretrained
    raise ValueError(
ValueError: Unrecognized model in microsoft/unilm-base-cased. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, bert-generation, camembert, xlm-roberta, pegasus, marian, mbart, bart, blenderbot, reformer, longformer, roberta, deberta, flaubert, fsmt, squeezebert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm-prophetnet, prophetnet, xlm, ctrl, electra, encoder-decoder, funnel, lxmert, dpr, layoutlm, rag
>>>
>>> model = AutoModel.from_pretrained("microsoft/unilm-base-cased")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\transformers\modeling_auto.py", line 623, in from_pretrained
    config, kwargs = AutoConfig.from_pretrained(
  File "C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\transformers\configuration_auto.py", line 341, in from_pretrained
    raise ValueError(
ValueError: Unrecognized model in microsoft/unilm-base-cased. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, bert-generation, camembert, xlm-roberta, pegasus, marian, mbart, bart, blenderbot, reformer, longformer, roberta, deberta, flaubert, fsmt, squeezebert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm-prophetnet, prophetnet, xlm, ctrl, electra, encoder-decoder, funnel, lxmert, dpr, layoutlm, rag
```
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on the forum/Stack Overflow**: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8065/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8065/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8064 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8064/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8064/comments | https://api.github.com/repos/huggingface/transformers/issues/8064/events | https://github.com/huggingface/transformers/pull/8064 | 729,818,409 | MDExOlB1bGxSZXF1ZXN0NTEwMjM2MDUw | 8,064 | [QOL] PretrainedConfig.to_diff_dict(other_config) | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Didn't work. Will re-open when I have something better."
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | This PR allows users to compare two configs in a backwards compatible way.
### Solution
The following calls work:
```python
bart_large_config = BartConfig.from_pretrained('facebook/bart-large')
bart_base_config = BartConfig.from_pretrained('facebook/bart-base')
t5_config = T5Config.from_pretrained('t5-small')
bart_large_config.to_diff_dict() # unchanged
bart_large_config.to_diff_dict(bart_base_config) # compares configs
bart_large_config.to_diff_dict(t5_config) # can be across subtypes
bart_large_config.to_diff_dict(bart_base_config.to_dict()) # can be against dict
```
Adds a test that the outputs are reasonable.
### Problem
The current best way to compare configs is to define your own function. Here is the one I use; it is also good for debugging conversion scripts:
```python
def dct_differences(dct_a, dct_b):
    SENTINEL = '__MissingKey'
    k1, k2 = set(dct_a), set(dct_b)  # just the keys
    deltas = []
    for k in k1.union(k2):
        vala, valb = dct_a.get(k, SENTINEL), dct_b.get(k, SENTINEL)
        # TODO(SS): nested dicts? Maybe better to dump to json and compare (after sorting keys!)
        if vala == valb:
            if vala == SENTINEL and valb == SENTINEL:
                raise AssertionError('Adversarial Sentinel Input!')
        else:
            deltas.append((k, vala, valb))
    return deltas
bart_large_config = BartConfig.from_pretrained('facebook/bart-large')
bart_base_config = BartConfig.from_pretrained('facebook/bart-base')
delta = dct_differences(bart_large_config.to_dict(), bart_base_config.to_dict())
```
this implementation is almost as useful without breaking backwards compatibility. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8064/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8064/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8064",
"html_url": "https://github.com/huggingface/transformers/pull/8064",
"diff_url": "https://github.com/huggingface/transformers/pull/8064.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8064.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/8063 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8063/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8063/comments | https://api.github.com/repos/huggingface/transformers/issues/8063/events | https://github.com/huggingface/transformers/pull/8063 | 729,812,026 | MDExOlB1bGxSZXF1ZXN0NTEwMjMwODI3 | 8,063 | Fix TF training arguments instantiation | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | MEMBER | null | Check that pytorch is installed before checking the device type. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8063/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8063/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8063",
"html_url": "https://github.com/huggingface/transformers/pull/8063",
"diff_url": "https://github.com/huggingface/transformers/pull/8063.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8063.patch",
"merged_at": 1603737566000
} |
https://api.github.com/repos/huggingface/transformers/issues/8062 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8062/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8062/comments | https://api.github.com/repos/huggingface/transformers/issues/8062/events | https://github.com/huggingface/transformers/pull/8062 | 729,809,630 | MDExOlB1bGxSZXF1ZXN0NTEwMjI4ODgz | 8,062 | Add AzureML in integrations via dedicated callback | {
"login": "davidefiocco",
"id": 4547987,
"node_id": "MDQ6VXNlcjQ1NDc5ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4547987?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davidefiocco",
"html_url": "https://github.com/davidefiocco",
"followers_url": "https://api.github.com/users/davidefiocco/followers",
"following_url": "https://api.github.com/users/davidefiocco/following{/other_user}",
"gists_url": "https://api.github.com/users/davidefiocco/gists{/gist_id}",
"starred_url": "https://api.github.com/users/davidefiocco/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidefiocco/subscriptions",
"organizations_url": "https://api.github.com/users/davidefiocco/orgs",
"repos_url": "https://api.github.com/users/davidefiocco/repos",
"events_url": "https://api.github.com/users/davidefiocco/events{/privacy}",
"received_events_url": "https://api.github.com/users/davidefiocco/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Code changes pass tests @sgugger ! Thanks @alvarobartt for having a look at those also.\r\n\r\nI guess I need to take care of docs/source/main_classes/callback.rst as well to complete the checklist though?",
"Yes, if you could add a line to the rst file, that would be great!",
"You might need to rebase to have the latest script for doc formatting, which should do everything to make the CI happy with just `make style`. Let me know if you need help.",
"Sorry for the somewhat messy hacktoberfest @sgugger ! \r\nI wasn't so aware of rst idiosyncrasies, and ways to diagnose issues, so I struggled a bit. Should be in order now (bonus, I fixed a typo on the way https://github.com/huggingface/transformers/pull/8062/commits/286f20c0594c3d16c824963c24c8fb1bc1d43bc6)"
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | # What does this PR do?
I propose this PR to let transformers log to AzureML, using the `Run` class documented at https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.run(class)?view=azure-ml-py
The intended behaviour is to enable `transformers` users to track metrics in the AzureML UI in this fashion:

Contributors to https://github.com/microsoft/AzureML-BERT and folks @microsoft may well come up with a better implementation though!
I am glad to improve this if reviewers like the idea, and to update docs and tests if needed.
@reviewers feel free to add any suggestions as my contributions to transformers have been very limited so far :)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Yes, check for detais https://discuss.huggingface.co/t/how-to-integrate-an-azuremlcallback-for-logging-in-azure/1713/4
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@julien-c is aware of this and @sgugger participated in the thread on forums above and implemented callbacks with https://github.com/huggingface/transformers/pull/7596
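For concreteness, the callback I have in mind looks roughly like this (a minimal sketch assuming `azureml-sdk` is installed — the final version may well differ):

```python
from azureml.core.run import Run
from transformers import TrainerCallback

class AzureMLCallback(TrainerCallback):
    """Sketch: forwards Trainer logs to the ambient AzureML run."""

    def __init__(self, azureml_run=None):
        self.azureml_run = azureml_run

    def on_init_end(self, args, state, control, **kwargs):
        # Grab the submitted run from the AzureML context if none was passed in.
        if self.azureml_run is None and state.is_world_process_zero:
            self.azureml_run = Run.get_context()

    def on_log(self, args, state, control, logs=None, **kwargs):
        if self.azureml_run and state.is_world_process_zero:
            for k, v in logs.items():
                if isinstance(v, (int, float)):
                    self.azureml_run.log(k, v, description=k)
```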
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8062/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8062/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8062",
"html_url": "https://github.com/huggingface/transformers/pull/8062",
"diff_url": "https://github.com/huggingface/transformers/pull/8062.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8062.patch",
"merged_at": 1603822915000
} |
https://api.github.com/repos/huggingface/transformers/issues/8061 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8061/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8061/comments | https://api.github.com/repos/huggingface/transformers/issues/8061/events | https://github.com/huggingface/transformers/pull/8061 | 729,797,847 | MDExOlB1bGxSZXF1ZXN0NTEwMjE5MjY3 | 8,061 | Doc fixes in preparation for the docstyle PR | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | COLLABORATOR | null | # What does this PR do?
This PR fixes a few docstrings and adds the `# docstyle-ignore` marker where necessary in preparation for the big docstyle PR. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8061/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8061/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8061",
"html_url": "https://github.com/huggingface/transformers/pull/8061",
"diff_url": "https://github.com/huggingface/transformers/pull/8061.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8061.patch",
"merged_at": 1603738869000
} |
https://api.github.com/repos/huggingface/transformers/issues/8060 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8060/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8060/comments | https://api.github.com/repos/huggingface/transformers/issues/8060/events | https://github.com/huggingface/transformers/issues/8060 | 729,762,554 | MDU6SXNzdWU3Mjk3NjI1NTQ= | 8,060 | a multitude of deprecations for pytorch-1.7+ | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"?",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,603 | 1,614 | 1,614 | CONTRIBUTOR | null | This is not urgent. There are a ton of deprecation warnings across many modules with pytorch-1.7+ and a few with python-3.8:
(I hard-wrapped the lines to avoid the need to scroll, but it makes it somewhat harder to see the warnings):
```
src/transformers/modeling_deberta.py:18 src/transformers/modeling_deberta.py:18
src/transformers/modeling_deberta.py:18 src/transformers/modeling_deberta.py:18
src/transformers/modeling_deberta.py:18
src/transformers/modeling_deberta.py:18:
DeprecationWarning: Using or importing the ABCs from 'collections' instead of
from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop
working from collections import Sequence
tests/test_logging.py::HfArgumentParserTest::test_integration
tests/test_logging.py:40:
DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
logger.warn(msg)
tests/test_logging.py::HfArgumentParserTest::test_integration
tests/test_logging.py:48:
DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
logger.warn(msg)
tests/test_benchmark.py::BenchmarkTest::test_inference_torchscript
tests/test_modeling_gpt2.py::GPT2ModelTest::test_torchscript
tests/test_modeling_gpt2.py::GPT2ModelTest::test_torchscript_output_attentions
tests/test_modeling_gpt2.py::GPT2ModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_gpt2.py:164:
TracerWarning: Converting a tensor to a Python float might cause the trace to be
incorrect. We can't record the data flow of Python values, so this value will be
treated as a constant in the future. This means that the trace might not
generalize to other inputs! w = w / (float(v.size(-1)) ** 0.5)
tests/test_benchmark.py::BenchmarkTest::test_inference_torchscript
tests/test_modeling_gpt2.py::GPT2ModelTest::test_torchscript
tests/test_modeling_gpt2.py::GPT2ModelTest::test_torchscript_output_attentions
tests/test_modeling_gpt2.py::GPT2ModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_gpt2.py:169:
TracerWarning: Converting a tensor to a Python index might cause the trace to be
incorrect. We can't record the data flow of Python values, so this value will be
treated as a constant in the future. This means that the trace might not
generalize to other inputs! mask = self.bias[:, :, ns - nd : ns, :ns]
tests/test_modeling_auto.py::AutoModelTest::test_from_identifier_from_model_type
tests/test_modeling_auto.py::AutoModelTest::test_from_pretrained_identifier
src/transformers/modeling_auto.py:821:
FutureWarning: The class `AutoModelWithLMHead` is deprecated and will be removed
in a future version. Please use `AutoModelForCausalLM` for causal language
models, `AutoModelForMaskedLM` for masked language models and
`AutoModelForSeq2SeqLM` for encoder-decoder models. warnings.warn(
tests/test_benchmark_tf.py::TFBenchmarkTest::test_train_no_configs
tests/test_benchmark_tf.py::TFBenchmarkTest::test_train_with_configs
/home/stas/anaconda3/envs/py38-pt17/lib/python3.8/site-packages/tensorflow/python/framework/indexed_slices.py:432:
UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape.
This may consume a large amount of memory. warnings.warn(
tests/test_modeling_albert.py::AlbertModelTest::test_torchscript
tests/test_modeling_albert.py::AlbertModelTest::test_torchscript_output_attentions
tests/test_modeling_albert.py::AlbertModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_albert.py:229:
TracerWarning: Converting a tensor to a Python index might cause the trace to be
incorrect. We can't record the data flow of Python values, so this value will be
treated as a constant in the future. This means that the trace might not
generalize to other inputs! position_ids = self.position_ids[:, :seq_length]
tests/test_modeling_albert.py: 3 warnings tests/test_modeling_bert.py: 3
warnings tests/test_modeling_bert_generation.py: 3 warnings
tests/test_modeling_distilbert.py: 2 warnings tests/test_modeling_dpr.py: 3
warnings tests/test_modeling_flaubert.py: 3 warnings
tests/test_modeling_electra.py: 3 warnings tests/test_modeling_layoutlm.py: 3
warnings tests/test_modeling_roberta.py: 3 warnings tests/test_modeling_xlm.py:
3 warnings tests/test_modeling_xlnet.py: 3 warnings
src/transformers/modeling_utils.py:1670:
TracerWarning: Converting a tensor to a Python boolean might cause the trace to
be incorrect. We can't record the data flow of Python values, so this value will
be treated as a constant in the future. This means that the trace might not
generalize to other inputs! input_tensor.shape == tensor_shape for input_tensor
in input_tensors
tests/test_modeling_bert_generation.py: 32 warnings
src/transformers/modeling_bert_generation.py:417:
DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
logger.warn("If you want to use `BertGenerationDecoder` as a standalone, add
`is_decoder=True.`")
tests/test_modeling_bert.py::BertModelTest::test_torchscript
tests/test_modeling_bert.py::BertModelTest::test_torchscript_output_attentions
tests/test_modeling_bert.py::BertModelTest::test_torchscript_output_hidden_state
tests/test_modeling_dpr.py::DPRModelTest::test_torchscript
tests/test_modeling_dpr.py::DPRModelTest::test_torchscript_output_attentions
tests/test_modeling_dpr.py::DPRModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_bert.py:191:
TracerWarning: Converting a tensor to a Python index might cause the trace to be
incorrect. We can't record the data flow of Python values, so this value will be
treated as a constant in the future. This means that the trace might not
generalize to other inputs! position_ids = self.position_ids[:, :seq_length]
tests/test_modeling_bart.py::BARTModelTest::test_torchscript
tests/test_modeling_bart.py::BARTModelTest::test_torchscript_output_attentions
tests/test_modeling_bart.py::BARTModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_bart.py:175:
TracerWarning: Converting a tensor to a Python boolean might cause the trace to
be incorrect. We can't record the data flow of Python values, so this value will
be treated as a constant in the future. This means that the trace might not
generalize to other inputs! if decoder_padding_mask is not None and
decoder_padding_mask.shape[1] > 1:
tests/test_modeling_bart.py: 3 warnings tests/test_modeling_flaubert.py: 3
warnings tests/test_modeling_fsmt.py: 3 warnings tests/test_modeling_roberta.py:
3 warnings tests/test_modeling_xlm.py: 3 warnings
/home/stas/anaconda3/envs/py38-pt17/lib/python3.8/site-packages/torch/nn/functional.py:1836:
TracerWarning: Converting a tensor to a Python boolean might cause the trace to
be incorrect. We can't record the data flow of Python values, so this value will
be treated as a constant in the future. This means that the trace might not
generalize to other inputs! assert padding_idx < weight.size(0), 'Padding_idx
must be within num_embeddings'
tests/test_modeling_bart.py::BARTModelTest::test_torchscript
tests/test_modeling_bart.py::BARTModelTest::test_torchscript_output_attentions
tests/test_modeling_bart.py::BARTModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_bart.py:720:
TracerWarning: Converting a tensor to a Python boolean might cause the trace to
be incorrect. We can't record the data flow of Python values, so this value will
be treated as a constant in the future. This means that the trace might not
generalize to other inputs! assert key_padding_mask is None or
key_padding_mask.shape == (bsz, src_len)
tests/test_modeling_bart.py::BARTModelTest::test_torchscript
tests/test_modeling_bart.py::BARTModelTest::test_torchscript_output_attentions
tests/test_modeling_bart.py::BARTModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_bart.py:722:
TracerWarning: Converting a tensor to a Python boolean might cause the trace to
be incorrect. We can't record the data flow of Python values, so this value will
be treated as a constant in the future. This means that the trace might not
generalize to other inputs! assert attn_weights.size() == (bsz * self.num_heads,
tgt_len, src_len)
tests/test_modeling_bart.py::BARTModelTest::test_torchscript
tests/test_modeling_bart.py::BARTModelTest::test_torchscript_output_attentions
tests/test_modeling_bart.py::BARTModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_bart.py:740:
TracerWarning: Converting a tensor to a Python boolean might cause the trace to
be incorrect. We can't record the data flow of Python values, so this value will
be treated as a constant in the future. This means that the trace might not
generalize to other inputs! assert attn_output.size() == (bsz * self.num_heads,
tgt_len, self.head_dim)
tests/test_modeling_bart.py::BARTModelTest::test_torchscript
tests/test_modeling_bart.py::BARTModelTest::test_torchscript_output_attentions
tests/test_modeling_bart.py::BARTModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_bart.py:287:
TracerWarning: Converting a tensor to a Python boolean might cause the trace to
be incorrect. We can't record the data flow of Python values, so this value will
be treated as a constant in the future. This means that the trace might not
generalize to other inputs! if torch.isinf(x).any() or torch.isnan(x).any():
tests/test_modeling_bart.py::BARTModelTest::test_torchscript
tests/test_modeling_bart.py::BARTModelTest::test_torchscript_output_attentions
tests/test_modeling_bart.py::BARTModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_bart.py:1190:
TracerWarning: Converting a tensor to a Python index might cause the trace to be
incorrect. We can't record the data flow of Python values, so this value will be
treated as a constant in the future. This means that the trace might not
generalize to other inputs! if len(torch.unique(eos_mask.sum(1))) > 1:
tests/test_modeling_common.py::UtilsFunctionsTest::test_top_k_top_p_filtering
tests/test_modeling_common.py:1196:
UserWarning: This overload of nonzero is deprecated: nonzero() Consider using
one of the following signatures instead: nonzero(*, bool as_tuple) (Triggered
internally at /pytorch/torch/csrc/utils/python_arg_parser.cpp:882.)
non_inf_idx = (output != -float("inf")).nonzero().to(device=torch_device)
tests/test_modeling_bert_generation.py::BertGenerationEncoderTest::test_torchscript
tests/test_modeling_bert_generation.py::BertGenerationEncoderTest::test_torchscript_output_attentions
tests/test_modeling_bert_generation.py::BertGenerationEncoderTest::test_torchscript_output_hidden_state
src/transformers/modeling_bert_generation.py:156:
TracerWarning: Converting a tensor to a Python index might cause the trace to be
incorrect. We can't record the data flow of Python values, so this value will be
treated as a constant in the future. This means that the trace might not
generalize to other inputs! position_ids = self.position_ids[:, :seq_length]
tests/test_modeling_flaubert.py: 14 warnings tests/test_modeling_xlm.py: 14
warnings
src/transformers/modeling_xlm.py:1220:
FutureWarning: The `lengths` parameter cannot be used with the XLM multiple
choice models. Please use the attention mask instead. warnings.warn(
tests/test_modeling_flaubert.py::FlaubertModelTest::test_flaubert_lm_head
tests/test_modeling_flaubert.py::FlaubertModelTest::test_model_outputs_equivalence
tests/test_modeling_xlm.py::XLMModelTest::test_model_outputs_equivalence
tests/test_modeling_xlm.py::XLMModelTest::test_xlm_lm_head
/home/stas/anaconda3/envs/py38-pt17/lib/python3.8/site-packages/torch/nn/_reduction.py:14:
UserWarning: reduction='elementwise_mean' is deprecated, please use
reduction='mean' instead. warnings.warn("reduction='elementwise_mean' is
deprecated, please use reduction='mean' instead.")
tests/test_modeling_flaubert.py::FlaubertModelTest::test_torchscript
tests/test_modeling_flaubert.py::FlaubertModelTest::test_torchscript_output_attentions
tests/test_modeling_flaubert.py::FlaubertModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_flaubert.py:188:
TracerWarning: Converting a tensor to a Python boolean might cause the trace to
be incorrect. We can't record the data flow of Python values, so this value will
be treated as a constant in the future. This means that the trace might not
generalize to other inputs! assert lengths.size(0) == bs
tests/test_modeling_flaubert.py::FlaubertModelTest::test_torchscript
tests/test_modeling_flaubert.py::FlaubertModelTest::test_torchscript_output_attentions
tests/test_modeling_flaubert.py::FlaubertModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_flaubert.py:189:
TracerWarning: Converting a tensor to a Python number might cause the trace to
be incorrect. We can't record the data flow of Python values, so this value will
be treated as a constant in the future. This means that the trace might not
generalize to other inputs! assert lengths.max().item() <= slen
tests/test_modeling_flaubert.py::FlaubertModelTest::test_torchscript
tests/test_modeling_flaubert.py::FlaubertModelTest::test_torchscript_output_attentions
tests/test_modeling_flaubert.py::FlaubertModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_flaubert.py:189:
TracerWarning: Converting a tensor to a Python boolean might cause the trace to
be incorrect. We can't record the data flow of Python values, so this value will
be treated as a constant in the future. This means that the trace might not
generalize to other inputs! assert lengths.max().item() <= slen
tests/test_modeling_flaubert.py::FlaubertModelTest::test_torchscript
tests/test_modeling_flaubert.py::FlaubertModelTest::test_torchscript_output_attentions
tests/test_modeling_flaubert.py::FlaubertModelTest::test_torchscript_output_hidden_state
tests/test_modeling_xlm.py::XLMModelTest::test_torchscript
tests/test_modeling_xlm.py::XLMModelTest::test_torchscript_output_attentions
tests/test_modeling_xlm.py::XLMModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_xlm.py:95:
TracerWarning: Converting a tensor to a Python number might cause the trace to
be incorrect. We can't record the data flow of Python values, so this value will
be treated as a constant in the future. This means that the trace might not
generalize to other inputs! assert lengths.max().item() <= slen
tests/test_modeling_flaubert.py::FlaubertModelTest::test_torchscript
tests/test_modeling_flaubert.py::FlaubertModelTest::test_torchscript_output_attentions
tests/test_modeling_flaubert.py::FlaubertModelTest::test_torchscript_output_hidden_state
tests/test_modeling_xlm.py::XLMModelTest::test_torchscript
tests/test_modeling_xlm.py::XLMModelTest::test_torchscript_output_attentions
tests/test_modeling_xlm.py::XLMModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_xlm.py:95:
TracerWarning: Converting a tensor to a Python boolean might cause the trace to
be incorrect. We can't record the data flow of Python values, so this value will
be treated as a constant in the future. This means that the trace might not
generalize to other inputs! assert lengths.max().item() <= slen
tests/test_modeling_flaubert.py::FlaubertModelTest::test_torchscript
tests/test_modeling_flaubert.py::FlaubertModelTest::test_torchscript_output_attentions
tests/test_modeling_flaubert.py::FlaubertModelTest::test_torchscript_output_hidden_state
tests/test_modeling_xlm.py::XLMModelTest::test_torchscript
tests/test_modeling_xlm.py::XLMModelTest::test_torchscript_output_attentions
tests/test_modeling_xlm.py::XLMModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_xlm.py:106:
TracerWarning: Converting a tensor to a Python boolean might cause the trace to
be incorrect. We can't record the data flow of Python values, so this value will
be treated as a constant in the future. This means that the trace might not
generalize to other inputs! assert mask.size() == (bs, slen)
tests/test_modeling_fsmt.py::FSMTModelTest::test_torchscript
tests/test_modeling_fsmt.py::FSMTModelTest::test_torchscript_output_attentions
tests/test_modeling_fsmt.py::FSMTModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_fsmt.py:1224:
TracerWarning: Converting a tensor to a Python boolean might cause the trace to
be incorrect. We can't record the data flow of Python values, so this value will
be treated as a constant in the future. This means that the trace might not
generalize to other inputs! if max_pos > self.weight.size(0):
tests/test_modeling_fsmt.py::FSMTModelTest::test_torchscript
tests/test_modeling_fsmt.py::FSMTModelTest::test_torchscript_output_attentions
tests/test_modeling_fsmt.py::FSMTModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_fsmt.py:763:
TracerWarning: Converting a tensor to a Python boolean might cause the trace to
be incorrect. We can't record the data flow of Python values, so this value will
be treated as a constant in the future. This means that the trace might not
generalize to other inputs! assert embed_dim == self.embed_dim
tests/test_modeling_fsmt.py::FSMTModelTest::test_torchscript
tests/test_modeling_fsmt.py::FSMTModelTest::test_torchscript_output_attentions
tests/test_modeling_fsmt.py::FSMTModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_fsmt.py:764:
TracerWarning: Converting a tensor to a Python boolean might cause the trace to
be incorrect. We can't record the data flow of Python values, so this value will
be treated as a constant in the future. This means that the trace might not
generalize to other inputs! assert list(query.size()) == [tgt_len, bsz,
embed_dim]
tests/test_modeling_fsmt.py::FSMTModelTest::test_torchscript
tests/test_modeling_fsmt.py::FSMTModelTest::test_torchscript_output_attentions
tests/test_modeling_fsmt.py::FSMTModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_fsmt.py:805:
TracerWarning: Converting a tensor to a Python boolean might cause the trace to
be incorrect. We can't record the data flow of Python values, so this value will
be treated as a constant in the future. This means that the trace might not
generalize to other inputs! assert attn_weights.size() == (bsz * self.num_heads,
tgt_len, src_len)
tests/test_modeling_fsmt.py::FSMTModelTest::test_torchscript
tests/test_modeling_fsmt.py::FSMTModelTest::test_torchscript_output_attentions
tests/test_modeling_fsmt.py::FSMTModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_fsmt.py:814:
TracerWarning: Converting a tensor to a Python boolean might cause the trace to
be incorrect. We can't record the data flow of Python values, so this value will
be treated as a constant in the future. This means that the trace might not
generalize to other inputs! assert key_padding_mask is None or
key_padding_mask.size()[:2] == (
tests/test_modeling_fsmt.py::FSMTModelTest::test_torchscript
tests/test_modeling_fsmt.py::FSMTModelTest::test_torchscript_output_attentions
tests/test_modeling_fsmt.py::FSMTModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_fsmt.py:833:
TracerWarning: Converting a tensor to a Python boolean might cause the trace to
be incorrect. We can't record the data flow of Python values, so this value will
be treated as a constant in the future. This means that the trace might not
generalize to other inputs! assert attn_output.size() == (bsz * self.num_heads,
tgt_len, self.head_dim)
tests/test_modeling_gpt2.py::GPT2ModelTest::test_gpt2_model_att_mask_past
tests/test_modeling_gpt2.py::GPT2ModelTest::test_gpt2_model_past
tests/test_modeling_gpt2.py::GPT2ModelTest::test_gpt2_model_past_large_inputs
src/transformers/modeling_gpt2.py:530:
FutureWarning: The `past` argument is deprecated and will be removed in a future
version, use `past_key_values` instead. warnings.warn(
tests/test_modeling_electra.py::ElectraModelTest::test_torchscript
tests/test_modeling_electra.py::ElectraModelTest::test_torchscript_output_attentions
tests/test_modeling_electra.py::ElectraModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_electra.py:180:
TracerWarning: Converting a tensor to a Python index might cause the trace to be
incorrect. We can't record the data flow of Python values, so this value will be
treated as a constant in the future. This means that the trace might not
generalize to other inputs! position_ids = self.position_ids[:, :seq_length]
tests/test_modeling_layoutlm.py::LayoutLMModelTest::test_torchscript
tests/test_modeling_layoutlm.py::LayoutLMModelTest::test_torchscript_output_attentions
tests/test_modeling_layoutlm.py::LayoutLMModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_layoutlm.py:87:
TracerWarning: Converting a tensor to a Python index might cause the trace to be
incorrect. We can't record the data flow of Python values, so this value will be
treated as a constant in the future. This means that the trace might not
generalize to other inputs! position_ids = self.position_ids[:, :seq_length]
tests/test_modeling_funnel.py::FunnelModelTest::test_torchscript
tests/test_modeling_funnel.py::FunnelModelTest::test_torchscript_output_attentions
tests/test_modeling_funnel.py::FunnelModelTest::test_torchscript_output_hidden_state
tests/test_modeling_funnel.py::FunnelBaseModelTest::test_torchscript
tests/test_modeling_funnel.py::FunnelBaseModelTest::test_torchscript_output_attentions
tests/test_modeling_funnel.py::FunnelBaseModelTest::test_torchscript_output_hidden_state
/home/stas/anaconda3/envs/py38-pt17/lib/python3.8/site-packages/torch/tensor.py:547:
TracerWarning: torch.tensor results are registered as constants in the trace.
You can safely ignore this warning if you use this function to create tensors
out of constant variables that would be the same every time you call this
function. In any other case, this might cause the trace to be incorrect. return
torch.tensor(other, dtype=dtype, device=self.device) ** self
tests/test_modeling_funnel.py::FunnelModelTest::test_torchscript
tests/test_modeling_funnel.py::FunnelModelTest::test_torchscript_output_attentions
tests/test_modeling_funnel.py::FunnelModelTest::test_torchscript_output_hidden_state
tests/test_modeling_funnel.py::FunnelBaseModelTest::test_torchscript
tests/test_modeling_funnel.py::FunnelBaseModelTest::test_torchscript_output_attentions
tests/test_modeling_funnel.py::FunnelBaseModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_funnel.py:314:
TracerWarning: Converting a tensor to a Python index might cause the trace to be
incorrect. We can't record the data flow of Python values, so this value will be
treated as a constant in the future. This means that the trace might not
generalize to other inputs! num_remove = shift * len(pooled_pos)
tests/test_modeling_funnel.py::FunnelModelTest::test_torchscript
tests/test_modeling_funnel.py::FunnelModelTest::test_torchscript_output_attentions
tests/test_modeling_funnel.py::FunnelModelTest::test_torchscript_output_hidden_state
tests/test_modeling_funnel.py::FunnelBaseModelTest::test_torchscript
tests/test_modeling_funnel.py::FunnelBaseModelTest::test_torchscript_output_attentions
tests/test_modeling_funnel.py::FunnelBaseModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_funnel.py:638:
TracerWarning: Converting a tensor to a Python boolean might cause the trace to
be incorrect. We can't record the data flow of Python values, so this value will
be treated as a constant in the future. This means that the trace might not
generalize to other inputs! pooling_flag = pooling_flag and block_index > 0
tests/test_modeling_funnel.py::FunnelModelTest::test_torchscript
tests/test_modeling_funnel.py::FunnelModelTest::test_torchscript_output_attentions
tests/test_modeling_funnel.py::FunnelModelTest::test_torchscript_output_hidden_state
tests/test_modeling_funnel.py::FunnelBaseModelTest::test_torchscript
tests/test_modeling_funnel.py::FunnelBaseModelTest::test_torchscript_output_attentions
tests/test_modeling_funnel.py::FunnelBaseModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_funnel.py:481:
TracerWarning: Converting a tensor to a Python boolean might cause the trace to
be incorrect. We can't record the data flow of Python values, so this value will
be treated as a constant in the future. This means that the trace might not
generalize to other inputs! shift = 2 if q_head.shape[1] != context_len else 1
tests/test_modeling_funnel.py::FunnelModelTest::test_torchscript
tests/test_modeling_funnel.py::FunnelModelTest::test_torchscript_output_attentions
tests/test_modeling_funnel.py::FunnelModelTest::test_torchscript_output_hidden_state
tests/test_modeling_funnel.py::FunnelBaseModelTest::test_torchscript
tests/test_modeling_funnel.py::FunnelBaseModelTest::test_torchscript_output_attentions
tests/test_modeling_funnel.py::FunnelBaseModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_funnel.py:431:
TracerWarning: Converting a tensor to a Python index might cause the trace to be
incorrect. We can't record the data flow of Python values, so this value will be
treated as a constant in the future. This means that the trace might not
generalize to other inputs! positional_attn = positional_attn[..., :context_len]
tests/test_modeling_funnel.py::FunnelModelTest::test_torchscript
tests/test_modeling_funnel.py::FunnelModelTest::test_torchscript_output_attentions
tests/test_modeling_funnel.py::FunnelModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_funnel.py:678:
TracerWarning: Converting a tensor to a Python index might cause the trace to be
incorrect. We can't record the data flow of Python values, so this value will be
treated as a constant in the future. This means that the trace might not
generalize to other inputs! output = output[:, : target_len - 1]
tests/test_modeling_gpt2.py::GPT2ModelTest::test_torchscript
tests/test_modeling_gpt2.py::GPT2ModelTest::test_torchscript_output_attentions
tests/test_modeling_gpt2.py::GPT2ModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_gpt2.py:1058:
TracerWarning: Converting a tensor to a Python index might cause the trace to be
incorrect. We can't record the data flow of Python values, so this value will be
treated as a constant in the future. This means that the trace might not
generalize to other inputs! pooled_logits = logits[range(batch_size),
sequence_lengths]
tests/test_modeling_openai.py::OpenAIGPTModelTest::test_torchscript
tests/test_modeling_openai.py::OpenAIGPTModelTest::test_torchscript_output_attentions
tests/test_modeling_openai.py::OpenAIGPTModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_openai.py:467:
TracerWarning: Converting a tensor to a Python index might cause the trace to be
incorrect. We can't record the data flow of Python values, so this value will be
treated as a constant in the future. This means that the trace might not
generalize to other inputs! position_ids = self.position_ids[None, :
input_shape[-1]]
tests/test_modeling_openai.py::OpenAIGPTModelTest::test_torchscript
tests/test_modeling_openai.py::OpenAIGPTModelTest::test_torchscript_output_attentions
tests/test_modeling_openai.py::OpenAIGPTModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_openai.py:180:
TracerWarning: Converting a tensor to a Python float might cause the trace to be
incorrect. We can't record the data flow of Python values, so this value will be
treated as a constant in the future. This means that the trace might not
generalize to other inputs! w = w / math.sqrt(v.size(-1))
tests/test_modeling_openai.py::OpenAIGPTModelTest::test_torchscript
tests/test_modeling_openai.py::OpenAIGPTModelTest::test_torchscript_output_attentions
tests/test_modeling_openai.py::OpenAIGPTModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_openai.py:183:
TracerWarning: Converting a tensor to a Python index might cause the trace to be
incorrect. We can't record the data flow of Python values, so this value will be
treated as a constant in the future. This means that the trace might not
generalize to other inputs! b = self.bias[:, :, : w.size(-2), : w.size(-1)]
tests/test_modeling_openai.py::OpenAIGPTModelTest::test_torchscript
tests/test_modeling_openai.py::OpenAIGPTModelTest::test_torchscript_output_attentions
tests/test_modeling_openai.py::OpenAIGPTModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_openai.py:823:
TracerWarning: Converting a tensor to a Python index might cause the trace to be
incorrect. We can't record the data flow of Python values, so this value will be
treated as a constant in the future. This means that the trace might not
generalize to other inputs! pooled_logits = logits[range(batch_size),
sequence_lengths]
tests/test_modeling_rag.py: 12 warnings
tests/test_retrieval_rag.py: 1 warning
src/transformers/tokenization_utils_base.py:613:
UserWarning: To copy construct from a tensor, it is recommended to use
sourceTensor.clone().detach() or
sourceTensor.clone().detach().requires_grad_(True), rather than
torch.tensor(sourceTensor). tensor = as_tensor(value)
tests/test_modeling_reformer.py: 58 warnings
tests/test_modeling_transfo_xl.py: 18 warnings
/home/stas/anaconda3/envs/py38-pt17/lib/python3.8/site-packages/torch/nn/modules/container.py:434:
UserWarning: Setting attributes on ParameterList is not supported.
warnings.warn("Setting attributes on ParameterList is not supported.")
tests/test_modeling_mobilebert.py::MobileBertModelTest::test_torchscript
tests/test_modeling_mobilebert.py::MobileBertModelTest::test_torchscript_output_attentions
tests/test_modeling_mobilebert.py::MobileBertModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_mobilebert.py:192:
TracerWarning: Converting a tensor to a Python index might cause the trace to be
incorrect. We can't record the data flow of Python values, so this value will be
treated as a constant in the future. This means that the trace might not
generalize to other inputs! position_ids = self.position_ids[:, :seq_length]
tests/test_modeling_mobilebert.py::MobileBertModelTest::test_torchscript
tests/test_modeling_mobilebert.py::MobileBertModelTest::test_torchscript_output_attentions
tests/test_modeling_mobilebert.py::MobileBertModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_mobilebert.py:534:
TracerWarning: torch.tensor results are registered as constants in the trace.
You can safely ignore this warning if you use this function to create tensors
out of constant variables that would be the same every time you call this
function. In any other case, this might cause the trace to be incorrect.
torch.tensor(1000),
tests/test_modeling_reformer.py::ReformerLSHAttnModelTest::test_reformer_cached_inference
src/transformers/modeling_reformer.py:899:
UserWarning: This overload of nonzero is deprecated: nonzero() Consider using
one of the following signatures instead: nonzero(*, bool as_tuple) (Triggered
internally at /pytorch/torch/csrc/utils/python_arg_parser.cpp:882.)
relevant_bucket_idx = (bucket_idx == (bucket_idx.shape[-1] - 1)).nonzero()
tests/test_modeling_t5.py::T5ModelTest::test_export_to_onnx
tests/test_modeling_t5.py::T5ModelTest::test_torchscript_output_attentions
tests/test_modeling_t5.py::T5ModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_utils.py:244:
TracerWarning: Converting a tensor to a Python boolean might cause the trace to
be incorrect. We can't record the data flow of Python values, so this value will
be treated as a constant in the future. This means that the trace might not
generalize to other inputs! if causal_mask.shape[1] < attention_mask.shape[1]:
tests/test_modeling_t5.py: 95 warnings
/home/stas/anaconda3/envs/py38-pt17/lib/python3.8/site-packages/torch/onnx/utils.py:760:
DeprecationWarning: an integer is required (got type
torch._C._onnx.TensorProtoDataType). Implicit conversion to integers using
__int__ is deprecated, and may be removed in a future version of Python.
return getattr(node, kind + "_")(name, value)
tests/test_modeling_t5.py::T5ModelTest::test_export_to_onnx
tests/test_modeling_t5.py::T5ModelTest::test_export_to_onnx
tests/test_modeling_t5.py::T5ModelTest::test_export_to_onnx
/home/stas/anaconda3/envs/py38-pt17/lib/python3.8/site-packages/torch/onnx/symbolic_opset9.py:1638:
DeprecationWarning: an integer is required (got type float). Implicit conversion
to integers using __int__ is deprecated, and may be removed in a future version
of Python. value_t=torch.tensor([fill_value],
dtype=sym_help.scalar_type_to_pytorch_type[dtype]))
tests/test_modeling_tf_auto.py::TFAutoModelTest::test_from_identifier_from_model_type
tests/test_modeling_tf_auto.py::TFAutoModelTest::test_from_pretrained_identifier
src/transformers/modeling_tf_auto.py:697:
FutureWarning: The class `TFAutoModelWithLMHead` is deprecated and will be
removed in a future version. Please use `TFAutoModelForCausalLM` for causal
language models, `TFAutoModelForMaskedLM` for masked language models and
`TFAutoModelForSeq2SeqLM` for encoder-decoder models. warnings.warn(
tests/test_modeling_squeezebert.py::SqueezeBertModelTest::test_torchscript_output_attentions
tests/test_modeling_squeezebert.py::SqueezeBertModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_squeezebert.py:78:
TracerWarning: Converting a tensor to a Python index might cause the trace to be
incorrect. We can't record the data flow of Python values, so this value will be
treated as a constant in the future. This means that the trace might not
generalize to other inputs! position_ids = self.position_ids[:, :seq_length]
tests/test_modeling_tf_flaubert.py: 9 warnings
tests/test_modeling_tf_xlm.py: 9 warnings
src/transformers/modeling_tf_xlm.py:994:
FutureWarning: The `lengths` parameter cannot be used with the XLM multiple
choice models. Please use the attention mask instead. warnings.warn(
tests/test_modeling_tf_flaubert.py::TFFlaubertModelTest::test_graph_mode
tests/test_modeling_tf_xlm.py::TFXLMModelTest::test_graph_mode
/home/stas/anaconda3/envs/py38-pt17/lib/python3.8/site-packages/tensorflow/python/autograph/impl/api.py:493:
FutureWarning: The `lengths` parameter cannot be used with the XLM multiple
choice models. Please use the attention mask instead. return
py_builtins.overload_of(f)(*args)
tests/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_compile_tf_model
tests/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_config
tests/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_keras_save_load
tests/test_modeling_xlnet.py::XLNetModelTest::test_config
tests/test_modeling_xlnet.py::XLNetModelTest::test_correct_missing_keys
tests/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_save_load
tests/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_train_pipeline_custom_model
tests/test_modeling_xlnet.py::XLNetModelTest::test_save_load
src/transformers/configuration_xlnet.py:205:
FutureWarning: This config doesn't use attention memories, a core feature of
XLNet. Consider setting `mem_len` to a non-zero value, for example `xlnet =
XLNetLMHeadModel.from_pretrained('xlnet-base-cased'', mem_len=1024)`, for
accurate training performance as well as an order of magnitude faster inference.
Starting from version 3.5.0, the default parameter will be 1024, following the
implementation in https://arxiv.org/abs/1906.08237 warnings.warn(
tests/test_modeling_xlm.py::XLMModelTest::test_torchscript
tests/test_modeling_xlm.py::XLMModelTest::test_torchscript_output_attentions
tests/test_modeling_xlm.py::XLMModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_xlm.py:531:
TracerWarning: Converting a tensor to a Python boolean might cause the trace to
be incorrect. We can't record the data flow of Python values, so this value will
be treated as a constant in the future. This means that the trace might not
generalize to other inputs! assert lengths.size(0) == bs
tests/test_modeling_xlm.py::XLMModelTest::test_torchscript
tests/test_modeling_xlm.py::XLMModelTest::test_torchscript_output_attentions
tests/test_modeling_xlm.py::XLMModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_xlm.py:532:
TracerWarning: Converting a tensor to a Python number might cause the trace to
be incorrect. We can't record the data flow of Python values, so this value will
be treated as a constant in the future. This means that the trace might not
generalize to other inputs! assert lengths.max().item() <= slen
tests/test_modeling_xlm.py::XLMModelTest::test_torchscript
tests/test_modeling_xlm.py::XLMModelTest::test_torchscript_output_attentions
tests/test_modeling_xlm.py::XLMModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_xlm.py:532:
TracerWarning: Converting a tensor to a Python boolean might cause the trace to
be incorrect. We can't record the data flow of Python values, so this value will
be treated as a constant in the future. This means that the trace might not
generalize to other inputs! assert lengths.max().item() <= slen
tests/test_modeling_xlm.py::XLMModelTest::test_torchscript
tests/test_modeling_xlm.py::XLMModelTest::test_torchscript_output_attentions
tests/test_modeling_xlm.py::XLMModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_xlm.py:546:
TracerWarning: Converting a tensor to a Python index might cause the trace to be
incorrect. We can't record the data flow of Python values, so this value will be
treated as a constant in the future. This means that the trace might not
generalize to other inputs! position_ids = self.position_ids[:, :slen]
tests/test_optimization.py::OptimizationTest::test_adafactor
src/transformers/optimization.py:512:
UserWarning: This overload of add_ is deprecated: add_(Number alpha, Tensor
other) Consider using one of the following signatures instead: add_(Tensor
other, *, Number alpha) (Triggered internally at
/pytorch/torch/csrc/utils/python_arg_parser.cpp:882.)
exp_avg_sq.mul_(beta2t).add_(1.0 - beta2t, update)
tests/test_optimization.py::ScheduleInitTest::test_schedulers
/home/stas/anaconda3/envs/py38-pt17/lib/python3.8/site-packages/torch/optim/lr_scheduler.py:247:
UserWarning: To get the last learning rate computed by the scheduler, please
use `get_last_lr()`. warnings.warn("To get the last learning rate computed by
the scheduler, "
tests/test_optimization.py::ScheduleInitTest::test_schedulers
/home/stas/anaconda3/envs/py38-pt17/lib/python3.8/site-packages/torch/optim/lr_scheduler.py:131:
UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`.
In PyTorch 1.1.0 and later, you should call them in the opposite order:
`optimizer.step()` before `lr_scheduler.step()`. Failure to do this will
result in PyTorch skipping the first value of the learning rate schedule. See
more details at
https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
warnings.warn("Detected call of `lr_scheduler.step()` before
`optimizer.step()`. "
tests/test_optimization.py::ScheduleInitTest::test_schedulers
/home/stas/anaconda3/envs/py38-pt17/lib/python3.8/site-packages/torch/optim/lr_scheduler.py:216:
UserWarning: Please also save or load the state of the optimizer when saving
or loading the scheduler. warnings.warn(SAVE_STATE_WARNING, UserWarning)
tests/test_optimization.py::ScheduleInitTest::test_schedulers
/home/stas/anaconda3/envs/py38-pt17/lib/python3.8/site-packages/torch/optim/lr_scheduler.py:234:
UserWarning: Please also save or load the state of the optimizer when saving
or loading the scheduler. warnings.warn(SAVE_STATE_WARNING, UserWarning)
tests/test_tokenization_auto.py::AutoTokenizerTest::test_tokenizer_identifier_with_correct_config
tests/test_tokenization_mbart.py::MBartEnroIntegrationTest::test_batch_fairseq_parity
tests/test_tokenization_t5.py::T5TokenizationTest::test_empty_target_text
tests/test_tokenization_t5.py::T5TokenizationTest::test_eos_in_input
tests/test_tokenization_t5.py::T5TokenizationTest::test_max_target_length
tests/test_tokenization_t5.py::T5TokenizationTest::test_outputs_not_longer_than_maxlen
tests/test_tokenization_t5.py::T5TokenizationTest::test_prepare_seq2seq_batch
src/transformers/tokenization_utils_base.py:1421:
FutureWarning: The `max_len` attribute has been deprecated and will be removed
in a future version, use `model_max_length` instead. warnings.warn(
tests/test_tokenization_albert.py: 2 warnings
tests/test_tokenization_bart.py: 2 warnings
tests/test_tokenization_bert.py: 2 warnings
tests/test_tokenization_bert_generation.py: 1 warning
tests/test_tokenization_bertweet.py: 1 warning
tests/test_tokenization_blenderbot.py: 1 warning
tests/test_tokenization_ctrl.py: 1 warning
tests/test_tokenization_camembert.py: 2 warnings
tests/test_tokenization_distilbert.py: 4 warnings
tests/test_tokenization_dpr.py: 8 warnings
tests/test_tokenization_fsmt.py: 1 warning
tests/test_tokenization_funnel.py: 2 warnings
tests/test_tokenization_herbert.py: 2 warnings
tests/test_tokenization_gpt2.py: 1 warning
tests/test_tokenization_layoutlm.py: 2 warnings
tests/test_tokenization_marian.py: 1 warning
tests/test_tokenization_lxmert.py: 2 warnings
tests/test_tokenization_mbart.py: 2 warnings
tests/test_tokenization_pegasus.py: 2 warnings
tests/test_tokenization_openai.py: 1 warning
tests/test_tokenization_phobert.py: 1 warning
tests/test_tokenization_deberta.py: 1 warning
tests/test_tokenization_prophetnet.py: 1 warning
tests/test_tokenization_reformer.py: 1 warning
tests/test_tokenization_squeezebert.py: 4 warnings
tests/test_tokenization_t5.py: 2 warnings
tests/test_tokenization_roberta.py: 2 warnings
tests/test_tokenization_transfo_xl.py: 1 warning
tests/test_tokenization_xlm.py: 1 warning
tests/test_tokenization_xlm_prophetnet.py: 1 warning
tests/test_tokenization_xlnet.py: 2 warnings
tests/test_tokenization_xlm_roberta.py: 2 warnings
src/transformers/tokenization_utils_base.py:2025:
FutureWarning: The `pad_to_max_length` argument is deprecated and will be
removed in a future version, use `padding=True` or `padding='longest'` to pad to
the longest sequence in the batch, or use `padding='max_length'` to pad to a max
length. In this case, you can give a specific length with `max_length` (e.g.
`max_length=45`) or leave max_length to None to pad to the maximal input size of
the model (e.g. 512 for Bert). warnings.warn(
tests/test_tokenization_t5.py::T5TokenizationTest::test_eos_in_input
tests/test_tokenization_t5.py::T5TokenizationTest::test_eos_treatment
src/transformers/tokenization_t5.py:183:
UserWarning: This sequence already has </s>. In future versions this behavior
may lead to duplicated eos tokens being added. warnings.warn(
tests/test_trainer.py: 44 warnings
/home/stas/anaconda3/envs/py38-pt17/lib/python3.8/site-packages/torch/nn/parallel/_functions.py:64:
UserWarning: Was asked to gather along dimension 0, but all input tensors were
scalars; will instead unsqueeze and return a vector. warnings.warn('Was asked
to gather along dimension 0, but all '
tests/test_trainer.py::TrainerIntegrationTest::test_can_resume_training
tests/test_trainer_callback.py::TrainerCallbackTest::test_event_flow
/home/stas/anaconda3/envs/py38-pt17/lib/python3.8/site-packages/torch/cuda/nccl.py:48:
DeprecationWarning: Using or importing the ABCs from 'collections' instead of
from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop
working if not isinstance(inputs, collections.Container) or isinstance(inputs,
torch.Tensor):
-- Docs: https://docs.pytest.org/en/stable/warnings.html
```
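Many of these warnings name their own fix. For instance, the deprecated `add_` overload reported for `optimization.py` maps onto the keyword form; a small runnable sketch (the tensors below are stand-ins, not the optimizer's actual state):
```
import torch

exp_avg_sq = torch.zeros(3)
update = torch.ones(3)
beta2t = 0.9
# Deprecated positional-alpha overload flagged above for optimization.py:
#   exp_avg_sq.mul_(beta2t).add_(1.0 - beta2t, update)
# Keyword form recommended by the warning itself:
exp_avg_sq.mul_(beta2t).add_(update, alpha=1.0 - beta2t)
print(exp_avg_sq)  # tensor([0.1000, 0.1000, 0.1000])
```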
@LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8060/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8060/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8059 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8059/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8059/comments | https://api.github.com/repos/huggingface/transformers/issues/8059/events | https://github.com/huggingface/transformers/pull/8059 | 729,759,657 | MDExOlB1bGxSZXF1ZXN0NTEwMTg4NjE0 | 8,059 | infer entailment label id on zero shot pipeline | {
"login": "joeddav",
"id": 9353833,
"node_id": "MDQ6VXNlcjkzNTM4MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joeddav",
"html_url": "https://github.com/joeddav",
"followers_url": "https://api.github.com/users/joeddav/followers",
"following_url": "https://api.github.com/users/joeddav/following{/other_user}",
"gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joeddav/subscriptions",
"organizations_url": "https://api.github.com/users/joeddav/orgs",
"repos_url": "https://api.github.com/users/joeddav/repos",
"events_url": "https://api.github.com/users/joeddav/events{/privacy}",
"received_events_url": "https://api.github.com/users/joeddav/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Wouldn't it be better to:\r\n- ask the model authors for the relevant models to update their config.json to include a `id2label`\r\n- and/or to modify them automatically for them on the hub?\r\n\r\nI feel this PR is adding a feature that works around an issue when we should be fixing the root issue (also fixes the displayed labels, other future features, etc.). Wdyt?",
"@julien-c Well, to be clear this PR ~~does~~ did two things:\r\n\r\n1. Switches from always using the last index to using the index determined by looking in the model config if present\r\n2. Gives the user the option to manually override the index in case the information isn't present in the config\r\n\r\nIf I understand correctly, your issue is just with (2). I think it's a fair point. I don't think we'll be able to ensure that [all NLI models](https://huggingface.co/models?search=nli) have a clearly defined label mapping though. But instead of an override arg, I think it might be better to just add a warning if the entailment label ID can't be found in the config.",
"@joeddav Ok yes 1/ is great.\r\n\r\n> I don't think we'll be able to ensure that [all NLI models](https://huggingface.co/models?search=nli) have a clearly defined label mapping though.\r\n\r\nWhy not?",
"@julien-c Just because there are almost 100 results for [\"NLI\"](https://huggingface.co/models?search=nli) on the model hub and I'd guess from a quick sampling that the majority don't have a label mapping defined. For each model we'd have to figure out which label is which, which would mean either getting the author to look it up and tell us or else running tests on the correct dataset to figure it ourselves.\r\n\r\nDo you think it'd be worthwhile to warn the user when uploading or creating configs with generic/missing label mappings (or with any other important fields missing) going forward? Defining a label2id seems like a rather obscure property that I would assume is purely cosmetic if I were uploading a model, i.e. I wouldn't expect it to actually impact code behavior for someone using my model."
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | # What does this PR do?
Adds an optional argument to the zero shot pipeline constructor to specify the label id of the NLI model that corresponds to "entailment", which it needs to calculate each candidate label's score. Most models in the hub use the last label id, but some differ (e.g. the recent [ynie/roberta-large-snli_mnli_fever_anli_R1_R2_R3-nli](https://huggingface.co/ynie/roberta-large-snli_mnli_fever_anli_R1_R2_R3-nli)).
If the argument is not passed, the pipeline will attempt to look up the entailment dimension in the model config's id2label mapping. If the config does not specify the entailment dimension, the value will be set to `-1`, indicating the last dimension of the model output.
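As a sketch of that lookup (not necessarily the exact code in this PR; `infer_entailment_id` is an illustrative name):
```
def infer_entailment_id(config) -> int:
    """Return the model output index for the "entailment" label, or -1
    (the last dimension) when the config does not declare it."""
    label2id = getattr(config, "label2id", None) or {}
    for label, idx in label2id.items():
        if label.lower().startswith("entail"):
            return idx
    return -1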
With this logic in place, the arg only needs to be passed when both (1) the model's entailment label id is not the last id and (2) the model config's `label2id` doesn't specify the entailment id. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8059/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8059/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8059",
"html_url": "https://github.com/huggingface/transformers/pull/8059",
"diff_url": "https://github.com/huggingface/transformers/pull/8059.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8059.patch",
"merged_at": 1603822196000
} |
https://api.github.com/repos/huggingface/transformers/issues/8058 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8058/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8058/comments | https://api.github.com/repos/huggingface/transformers/issues/8058/events | https://github.com/huggingface/transformers/issues/8058 | 729,755,781 | MDU6SXNzdWU3Mjk3NTU3ODE= | 8,058 | [testing] port test_trainer_distributed to run with pytest | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Ooops, closed it by mistake too early. But it has been resolved here:\r\nhttps://github.com/huggingface/transformers/pull/8107"
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | Extracting the request from https://github.com/huggingface/transformers/pull/7993#issuecomment-716508513 to this issue to make it easier to track.
Now that we have a framework to run distributed tests under `pytest`, `test_trainer_distributed` needs to be ported there.
I will work on that.
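For reference, the rough shape the ported test can take (a sketch only: it shells out to `torch.distributed.launch`, assumes a 2-GPU machine, and reuses the existing test file as the launched script):
```
import subprocess
import sys

def test_trainer_distributed():
    cmd = [
        sys.executable,
        "-m",
        "torch.distributed.launch",
        "--nproc_per_node=2",
        "tests/test_trainer_distributed.py",
    ]
    # The distributed run either finishes cleanly or the test fails,
    # with the captured stderr attached for debugging.
    result = subprocess.run(cmd, capture_output=True, text=True)
    assert result.returncode == 0, result.stderr
```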
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8058/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8058/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8057 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8057/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8057/comments | https://api.github.com/repos/huggingface/transformers/issues/8057/events | https://github.com/huggingface/transformers/pull/8057 | 729,736,172 | MDExOlB1bGxSZXF1ZXN0NTEwMTY5NjM1 | 8,057 | [testing] fixing crash in deberta | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"p.s. apparently there has been a deprecation warning with pytorch-1.6, but since this project isn't quite paying attention to the warnings it lead to this crash with pytorch-1.7. It seems to be a weird situation where a deprecation warning hasn't been turned into an error and instead leading to a crash with this particular issue, but perhaps setting an intention to keep the warnings in check would save a possible hassle in the future. https://github.com/huggingface/transformers/issues/8060\r\n\r\nPerhaps what would help is to automatically turn selective types of warnings to errors, and thus not let those slide until they become a problem.\r\n\r\nAlso having a scheduled CI that runs occasionally on pytorch-nightly (and any release candidates) would give an early alert."
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | This PR fixes a crash in deberta tests w/ pytorch-1.7+:
```
RuntimeError: diff_view_meta->output_nr_ == 0 INTERNAL ASSERT FAILED at "/opt/conda/conda- \
bld/pytorch_1603436966316/work/torch/csrc/autograd/variable.cpp":363, please report a bug to PyTorch.
```
All credit goes to @gchanan, thank you! For the details of why, please see https://github.com/huggingface/transformers/issues/8022#issuecomment-716252599
Fixes: #8022 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8057/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8057/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8057",
"html_url": "https://github.com/huggingface/transformers/pull/8057",
"diff_url": "https://github.com/huggingface/transformers/pull/8057.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8057.patch",
"merged_at": 1603732751000
} |
https://api.github.com/repos/huggingface/transformers/issues/8056 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8056/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8056/comments | https://api.github.com/repos/huggingface/transformers/issues/8056/events | https://github.com/huggingface/transformers/pull/8056 | 729,729,286 | MDExOlB1bGxSZXF1ZXN0NTEwMTY0MDA5 | 8,056 | [TF] from_pt should respect authorized_unexpected_keys | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | This comes in handy when
```
"model.encoder.embed_tokens.weight",
"model.decoder.embed_tokens.weight",
```
are in the PT state dict but not in the TF symbolic weights.
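A minimal sketch of the mechanism (illustrative names and patterns; the real logic lives in the TF weight-loading code):
```
import re

def drop_authorized_unexpected(keys, authorized_patterns):
    """Filter out PT keys matching any authorized-unexpected pattern."""
    if authorized_patterns is None:
        return list(keys)
    return [k for k in keys if not any(re.search(p, k) for p in authorized_patterns)]

unexpected = drop_authorized_unexpected(
    ["model.encoder.embed_tokens.weight", "model.decoder.layers.0.fc1.weight"],
    authorized_patterns=[r"embed_tokens\.weight"],
)
print(unexpected)  # ['model.decoder.layers.0.fc1.weight']
```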
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8056/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8056/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8056",
"html_url": "https://github.com/huggingface/transformers/pull/8056",
"diff_url": "https://github.com/huggingface/transformers/pull/8056.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8056.patch",
"merged_at": 1603734808000
} |
https://api.github.com/repos/huggingface/transformers/issues/8055 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8055/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8055/comments | https://api.github.com/repos/huggingface/transformers/issues/8055/events | https://github.com/huggingface/transformers/issues/8055 | 729,722,332 | MDU6SXNzdWU3Mjk3MjIzMzI= | 8,055 | BertEncoder has no attribute 'bias' when converting a TF checkpoint | {
"login": "heraclex12",
"id": 13283488,
"node_id": "MDQ6VXNlcjEzMjgzNDg4",
"avatar_url": "https://avatars.githubusercontent.com/u/13283488?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/heraclex12",
"html_url": "https://github.com/heraclex12",
"followers_url": "https://api.github.com/users/heraclex12/followers",
"following_url": "https://api.github.com/users/heraclex12/following{/other_user}",
"gists_url": "https://api.github.com/users/heraclex12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/heraclex12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/heraclex12/subscriptions",
"organizations_url": "https://api.github.com/users/heraclex12/orgs",
"repos_url": "https://api.github.com/users/heraclex12/repos",
"events_url": "https://api.github.com/users/heraclex12/events{/privacy}",
"received_events_url": "https://api.github.com/users/heraclex12/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,603 | 1,609 | 1,609 | NONE | null | # ❓ Questions & Help
`BertEncoder` object has no attribute 'bias' when converting a TF checkpoint
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
<!-- Description of your issue -->
I tried to convert my pretrained BERT TF checkpoint, but I got this error:
```
/usr/local/lib/python3.6/dist-packages/transformers/modeling_bert.py in load_tf_weights_in_bert(model, config, tf_checkpoint_path)
133 pointer = getattr(pointer, "weight")
134 elif scope_names[0] == "output_bias" or scope_names[0] == "beta":
--> 135 pointer = getattr(pointer, "bias")
136 elif scope_names[0] == "output_weights":
137 pointer = getattr(pointer, "weight")
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __getattr__(self, name)
770 return modules[name]
771 raise ModuleAttributeError("'{}' object has no attribute '{}'".format(
--> 772 type(self).__name__, name))
773
774 def __setattr__(self, name: str, value: Union[Tensor, 'Module']) -> None:
ModuleAttributeError: 'BertEncoder' object has no attribute 'bias'
```
I used both the `load_tf_weights_in_bert` function and `BertForPreTraining.from_pretrained`, but neither worked.
My code:
```
from transformers import BertConfig, BertForPreTraining

config = BertConfig.from_json_file('bertvn_base/bertvn_base_config.json')
model = BertForPreTraining.from_pretrained('bertvn_base/model.ckpt', from_tf=True, config=config)
```
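For reference, the standalone conversion helper can be invoked like this (a sketch with the same files as above, assuming the helper's current signature; it goes through the same `load_tf_weights_in_bert` code path):
```
from transformers.convert_bert_original_tf_checkpoint_to_pytorch import (
    convert_tf_checkpoint_to_pytorch,
)

# Same files as in the snippet above; paths are specific to my setup.
convert_tf_checkpoint_to_pytorch(
    tf_checkpoint_path='bertvn_base/model.ckpt',
    bert_config_file='bertvn_base/bertvn_base_config.json',
    pytorch_dump_path='bertvn_base/pytorch_model.bin',
)
```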
My config:
```
{"attention_probs_dropout_prob": 0,
"hidden_act": "gelu",
"hidden_dropout_prob": 0,
"embedding_size": 768,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"max_position_embeddings": 192,
"num_attention_heads": 12,
"num_hidden_layers": 12,
"num_hidden_groups": 12,
"net_structure_type": 0,
"gap_size": 0,
"num_memory_blocks": 0,
"inner_group_num": 1,
"down_scale_factor": 1,
"type_vocab_size": 2,
"vocab_size": 120000
}
```
Thanks for your help!
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. --> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8055/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8055/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8054 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8054/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8054/comments | https://api.github.com/repos/huggingface/transformers/issues/8054/events | https://github.com/huggingface/transformers/issues/8054 | 729,706,558 | MDU6SXNzdWU3Mjk3MDY1NTg= | 8,054 | Add m2m 100 multilingual translation model from FAIR | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
] | [
"If it helps, I managed to load the weights from M2M100 - 418M param model to Mbart\r\n```\r\nfrom transformers import MBartForConditionalGeneration, MBartConfig, AutoTokenizer, AutoModelForSeq2SeqLM\r\nfrom fairseq import checkpoint_utils, options, tasks, utils\r\nimport torch\r\n\r\nwith open('418M_last_checkpoint.pt', 'rb') as f:\r\n state = torch.load(f, map_location=torch.device(\"cpu\"))\r\nstate = checkpoint_utils._upgrade_state_dict(state)\r\nargs = state['args']\r\nargs.fixed_dictionary = \"model_dict.128k.txt\"\r\nargs.source_lang = 'en'\r\nargs.target_lang = 'hi'\r\n\r\nweights = state['model']\r\nkeys = [k for k in weights.keys()]\r\nfor key in keys:\r\n if key.startswith('encoder.') or key.startswith('decoder.'):\r\n new_key = 'model.' + key\r\n weights[new_key] = weights[key]\r\n del weights[key]\r\nweights['model.shared.weight'] = weights['model.encoder.embed_tokens.weight']\r\n\r\nconfig1 = MBartConfig(\r\n activation_function='relu',\r\n vocab_size=128112,\r\n encoder_layerdrop=0.05,\r\n decoder_layerdrop=0.05,\r\n attention_dropout=0.1,\r\n add_final_layer_norm=True,\r\n normalize_before=True,\r\n scale_embedding=True,\r\n static_position_embeddings=True,\r\n pad_token_id=1,\r\n bos_token_id=0,\r\n eos_token_id=2,\r\n normalize_embedding=True,\r\n use_cache=False\r\n)\r\nmbart1 = MBartForConditionalGeneration(config1)\r\nmbart1.load_state_dict(weights, strict=False)\r\n```\r\nThis is based on the checkpoint and dictionary provided [here](https://github.com/pytorch/fairseq/tree/master/examples/m2m_100#418m-and-12b-model).\r\n\r\nI also had to replace the position embeddings in `modeling_bart` with the [code from fairseq](https://github.com/pytorch/fairseq/blob/master/fairseq/modules/sinusoidal_positional_embedding.py), because the fairseq implementation of the embeddings seems to be different form the one present in `modeling_bart`.\r\n\r\nAlthough the weights load successfully it generates random tokens, albeit in the correct language. I have a feeling that there's something going on in fairseq's generate function that is not accounted for here, though I may be wrong.\r\n\r\nWould greatly appreciate any ideas you might have to debug the generation aspect.\r\n\r\nHope this helps! Thanks!",
"This issue has been stale for 1 month.",
"`M2M100` is now integrated! \r\n\r\ndoc: https://huggingface.co/transformers/master/model_doc/m2m_100.html\r\nmodels: https://huggingface.co/models?filter=m2m_100",
"> `M2M100` is now integrated!\r\n> \r\n> doc: https://huggingface.co/transformers/master/model_doc/m2m_100.html\r\n> models: https://huggingface.co/models?filter=m2m_100\r\n\r\nThere is a problem with loading the model `model = M2M100ForConditionalGeneration.from_pretrained('facebook/m2m100_418M')`\r\nproduces OSError: Unable to load weights from pytorch checkpoint file for 'facebook/m2m100_418M' at '/root/.cache/huggingface/transformers/f9eabc2ccf1b4ddafac5c7f6dc837130ab7122d75ee98a64ed0a446a20b84871.53192defd013a2942c1d27b5842eba64b84d0e49943b0892c8f71967bf053029'\r\n\r\nA manual download of pytorch_model.bin leads to a similar exception, as it produces a zip.",
"Hi @ciortanmadalina \r\n\r\nI just tried this and can load the model successfully. This seems to be the issue with the cache, can you delete the cache and try again?",
"> Hi @ciortanmadalina\r\n> \r\n> I just tried this and can load the model successfully. This seems to be the issue with the cache, can you delete the cache and try again?\r\n\r\nI soved it: the problem was not the cache but the pytorch version (1.4), which strangely enough, didn't raise a problem for the other transformer models I used (e.g. T5, Bert). Once I upgraded to 1.7, the issue was gone. Thanks for your answer!"
] | 1,603 | 1,616 | 1,615 | CONTRIBUTOR | null | Weights and code are available.
+ Fairseq Code: https://github.com/pytorch/fairseq/tree/master/examples/m2m_100?fbclid=IwAR304kICXsffdDMogK4MWf4D7Xeu_3Cbmgu8pBCU_jKcjijCuJfLK7CY9_I
+ Paper: https://arxiv.org/abs/2010.11125
+ This model will not run on 1 V100 GPU, so model parallelism will be needed.
+ I would expect the state dict to be very similar to mBART, but I'm not sure yet; the sketch below is a quick way to check.
+ All I've done is download the state dict, run their command, and ask for help (https://github.com/pytorch/fairseq/issues/2772#issuecomment-716152453) when it broke.
Leaving this unassigned in case somebody else wants to take over.
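A quick way to compare the checkpoint against mBART's layout (an assumption-laden sketch: `418M_last_checkpoint.pt` is the 418M file from the fairseq README, and fairseq checkpoints keep the weights under the `"model"` key):
```
import torch

state = torch.load("418M_last_checkpoint.pt", map_location="cpu")
weights = state["model"]
# Top-level module prefixes, e.g. ['decoder', 'encoder']:
print(sorted({key.split(".")[0] for key in weights}))
# Spot-check a few parameter names and shapes against mBART's modules:
for key in list(weights)[:10]:
    print(key, tuple(weights[key].shape))
```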
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8054/reactions",
"total_count": 9,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 9,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8054/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8053 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8053/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8053/comments | https://api.github.com/repos/huggingface/transformers/issues/8053/events | https://github.com/huggingface/transformers/pull/8053 | 729,635,985 | MDExOlB1bGxSZXF1ZXN0NTEwMDg4MzY1 | 8,053 | Minor error fix of 'bart-large-cnn' details in the pretrained_models doc | {
"login": "forest1988",
"id": 2755894,
"node_id": "MDQ6VXNlcjI3NTU4OTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2755894?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/forest1988",
"html_url": "https://github.com/forest1988",
"followers_url": "https://api.github.com/users/forest1988/followers",
"following_url": "https://api.github.com/users/forest1988/following{/other_user}",
"gists_url": "https://api.github.com/users/forest1988/gists{/gist_id}",
"starred_url": "https://api.github.com/users/forest1988/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/forest1988/subscriptions",
"organizations_url": "https://api.github.com/users/forest1988/orgs",
"repos_url": "https://api.github.com/users/forest1988/repos",
"events_url": "https://api.github.com/users/forest1988/events{/privacy}",
"received_events_url": "https://api.github.com/users/forest1988/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks!",
"Thank you, too!"
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | I found that there seemed to be a mistake regarding "facebook/bart-large-cnn" in the pretrained_models doc.
# What does this PR do?
While checking the model explanations in the pretrained_models doc, I found what seemed to be a mistake.
Regarding `facebook/bart-large-cnn`, the details of the model are as follows:
```
12-layer, 1024-hidden, 16-heads, 406M parameters (same as base)
bart-large base architecture finetuned on cnn summarization task
```
If my understanding is correct, it seems that `12-layer` and `(same as base)` should be `24-layer` and `(same as large)`.
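Under that reading, the corrected entry would be:
```
24-layer, 1024-hidden, 16-heads, 406M parameters (same as large)
bart-large base architecture finetuned on cnn summarization task
```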
I asked a question in the forum about this:
https://discuss.huggingface.co/t/there-seems-to-be-a-mistake-in-documentation-pretrained-models-html-regarding-bart/1746/
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
documentation: @sgugger
(Thank you for kindly answering my question in the forum!)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8053/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8053/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8053",
"html_url": "https://github.com/huggingface/transformers/pull/8053",
"diff_url": "https://github.com/huggingface/transformers/pull/8053.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8053.patch",
"merged_at": 1603724717000
} |
https://api.github.com/repos/huggingface/transformers/issues/8052 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8052/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8052/comments | https://api.github.com/repos/huggingface/transformers/issues/8052/events | https://github.com/huggingface/transformers/pull/8052 | 729,629,218 | MDExOlB1bGxSZXF1ZXN0NTEwMDgyODcw | 8,052 | Fix a bug for `CallbackHandler.callback_list` | {
"login": "harupy",
"id": 17039389,
"node_id": "MDQ6VXNlcjE3MDM5Mzg5",
"avatar_url": "https://avatars.githubusercontent.com/u/17039389?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/harupy",
"html_url": "https://github.com/harupy",
"followers_url": "https://api.github.com/users/harupy/followers",
"following_url": "https://api.github.com/users/harupy/following{/other_user}",
"gists_url": "https://api.github.com/users/harupy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/harupy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/harupy/subscriptions",
"organizations_url": "https://api.github.com/users/harupy/orgs",
"repos_url": "https://api.github.com/users/harupy/repos",
"events_url": "https://api.github.com/users/harupy/events{/privacy}",
"received_events_url": "https://api.github.com/users/harupy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Do I need to add a test in `tests/test_trainer_callback.py` verifying that instantiating a trainer with duplicated callbacks doesn't fail?\r\n\r\n```python\r\n# this should not fail\r\ntrainer = self.get_trainer(\r\n callbacks=[MyTestTrainerCallback, MyTestTrainerCallback],\r\n)\r\n```",
"@sgugger Thanks for the approval. I just added a test that verifies the following:\r\n\r\n1. `Trainer` can be instantiated with duplicated callacks.\r\n2. A warning is emitted for duplicated callbacks.\r\n"
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fix a bug where `CallbackHandler.callback_list` fails when the given callbacks contain duplicates:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-40-9605b122f4d1> in <module>()
2 from transformers.trainer import DEFAULT_CALLBACKS
3
----> 4 CallbackHandler(DEFAULT_CALLBACKS + [MLflowCallback], "model", "optimizer", "lr_scheduler")
2 frames
/usr/local/lib/python3.6/dist-packages/transformers/trainer_callback.py in __init__(self, callbacks, model, optimizer, lr_scheduler)
277 self.callbacks = []
278 for cb in callbacks:
--> 279 self.add_callback(cb)
280 self.model = model
281 self.optimizer = optimizer
/usr/local/lib/python3.6/dist-packages/transformers/trainer_callback.py in add_callback(self, callback)
299 f"You are adding a {cb_class} to the callbacks of this Trainer, but there is already one. The current"
300 + "list of callbacks is\n:"
--> 301 + self.callback_list
302 )
303 self.callbacks.append(cb)
/usr/local/lib/python3.6/dist-packages/transformers/trainer_callback.py in callback_list(self)
326 @property
327 def callback_list(self):
--> 328 return "\n".join(self.callbacks)
329
330 def on_init_end(self, args: TrainingArguments, state: TrainerState, control: TrainerControl):
TypeError: sequence item 0: expected str instance, DefaultFlowCallback found
```
Code to reproduce the bug:
```python
from transformers.integrations import MLflowCallback  # import needed for the snippet to run
from transformers.trainer import DEFAULT_CALLBACKS
from transformers.trainer_callback import CallbackHandler

CallbackHandler(DEFAULT_CALLBACKS + [MLflowCallback], "model", "optimizer", "lr_scheduler")
```
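For context, `self.callbacks` holds callback instances rather than strings, so the plain `str.join` in `callback_list` blows up. A minimal sketch of a fix, joining human-readable class names instead (the merged patch may differ):
```python
@property
def callback_list(self):
    # Join the callbacks' class names; str.join requires strings,
    # not the callback objects themselves.
    return "\n".join(cb.__class__.__name__ for cb in self.callbacks)
```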
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8052/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8052/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8052",
"html_url": "https://github.com/huggingface/transformers/pull/8052",
"diff_url": "https://github.com/huggingface/transformers/pull/8052.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8052.patch",
"merged_at": 1603809425000
} |
https://api.github.com/repos/huggingface/transformers/issues/8051 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8051/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8051/comments | https://api.github.com/repos/huggingface/transformers/issues/8051/events | https://github.com/huggingface/transformers/pull/8051 | 729,591,678 | MDExOlB1bGxSZXF1ZXN0NTEwMDUyMzYy | 8,051 | minor model card description updates | {
"login": "joeddav",
"id": 9353833,
"node_id": "MDQ6VXNlcjkzNTM4MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joeddav",
"html_url": "https://github.com/joeddav",
"followers_url": "https://api.github.com/users/joeddav/followers",
"following_url": "https://api.github.com/users/joeddav/following{/other_user}",
"gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joeddav/subscriptions",
"organizations_url": "https://api.github.com/users/joeddav/orgs",
"repos_url": "https://api.github.com/users/joeddav/repos",
"events_url": "https://api.github.com/users/joeddav/events{/privacy}",
"received_events_url": "https://api.github.com/users/joeddav/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | # What does this PR do?
Makes a few minor updates to the [joeddav/xlm-roberta-large-xnli](https://huggingface.co/joeddav/xlm-roberta-large-xnli) model card, such as removing the reference to the deprecated zero-shot demo, since users can now play with zero-shot classification via the embedded widget. Also links to the distilled BART models. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8051/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8051/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8051",
"html_url": "https://github.com/huggingface/transformers/pull/8051",
"diff_url": "https://github.com/huggingface/transformers/pull/8051.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8051.patch",
"merged_at": 1603721061000
} |
https://api.github.com/repos/huggingface/transformers/issues/8050 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8050/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8050/comments | https://api.github.com/repos/huggingface/transformers/issues/8050/events | https://github.com/huggingface/transformers/pull/8050 | 729,572,886 | MDExOlB1bGxSZXF1ZXN0NTEwMDM3MjU1 | 8,050 | invalid argument wwm passed to the run_language_modeling.py file | {
"login": "MohammadrezaBanaei",
"id": 43634296,
"node_id": "MDQ6VXNlcjQzNjM0Mjk2",
"avatar_url": "https://avatars.githubusercontent.com/u/43634296?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MohammadrezaBanaei",
"html_url": "https://github.com/MohammadrezaBanaei",
"followers_url": "https://api.github.com/users/MohammadrezaBanaei/followers",
"following_url": "https://api.github.com/users/MohammadrezaBanaei/following{/other_user}",
"gists_url": "https://api.github.com/users/MohammadrezaBanaei/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MohammadrezaBanaei/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MohammadrezaBanaei/subscriptions",
"organizations_url": "https://api.github.com/users/MohammadrezaBanaei/orgs",
"repos_url": "https://api.github.com/users/MohammadrezaBanaei/repos",
"events_url": "https://api.github.com/users/MohammadrezaBanaei/events{/privacy}",
"received_events_url": "https://api.github.com/users/MohammadrezaBanaei/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null |
# What does this PR do?
`--wwm` can't be passed as an argument to run_language_modeling.py and should be changed to `--whole_word_mask`.
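For illustration, a hypothetical invocation with the corrected flag (the model name and file paths are placeholders, not values from this PR):
```bash
python run_language_modeling.py \
    --model_name_or_path bert-base-uncased \
    --train_data_file train.txt \
    --output_dir output \
    --do_train \
    --mlm \
    --whole_word_mask
```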
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
@stefan-it | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8050/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8050/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8050",
"html_url": "https://github.com/huggingface/transformers/pull/8050",
"diff_url": "https://github.com/huggingface/transformers/pull/8050.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8050.patch",
"merged_at": 1603728019000
} |
https://api.github.com/repos/huggingface/transformers/issues/8049 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8049/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8049/comments | https://api.github.com/repos/huggingface/transformers/issues/8049/events | https://github.com/huggingface/transformers/pull/8049 | 729,557,613 | MDExOlB1bGxSZXF1ZXN0NTEwMDI0NzUz | 8,049 | Fix + Test | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | MEMBER | null | Fix an edge case of the blenderbot-90 tokenizer.
Closes #8029
# Context
If the blenderbot-90 tokenizer is used to tokenize the following sequence:
```py
sequence = "Ok ."
```
It will first split it into two tokens:
https://github.com/huggingface/transformers/blob/8bbe8247f13057b7df1b2c9abbfacb05b30020bf/src/transformers/tokenization_blenderbot.py#L221
Those two tokens will be `['Ok', '.']`
The issue is that, when passed the second token, the `bpe` method will convert it from `'.'` to `' .'` here:
https://github.com/huggingface/transformers/blob/8bbe8247f13057b7df1b2c9abbfacb05b30020bf/src/transformers/tokenization_blenderbot.py#L160
This then gets split on spaces here:
https://github.com/huggingface/transformers/blob/8bbe8247f13057b7df1b2c9abbfacb05b30020bf/src/transformers/tokenization_blenderbot.py#L166
This is where the issue lies, as it creates two strings: `["", "."]`, the first one being empty.
It then crashes a bit further on, as we try to index the empty string:
https://github.com/huggingface/transformers/blob/8bbe8247f13057b7df1b2c9abbfacb05b30020bf/src/transformers/tokenization_blenderbot.py#L171
## Proposal
Ensure that the token has a length > 0 before handling it; otherwise, skip that token.
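As a standalone illustration of the guard (not the literal patch applied here):
```python
def split_nonempty(word: str) -> list:
    # " .".split(" ") yields ["", "."]; dropping the empty piece means
    # later indexing such as token[-1] can never hit an empty string.
    return [token for token in word.split(" ") if len(token) > 0]

assert split_nonempty(" .") == ["."]
```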
Added a test.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8049/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8049/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8049",
"html_url": "https://github.com/huggingface/transformers/pull/8049",
"diff_url": "https://github.com/huggingface/transformers/pull/8049.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8049.patch",
"merged_at": 1603729948000
} |
https://api.github.com/repos/huggingface/transformers/issues/8048 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8048/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8048/comments | https://api.github.com/repos/huggingface/transformers/issues/8048/events | https://github.com/huggingface/transformers/pull/8048 | 729,552,304 | MDExOlB1bGxSZXF1ZXN0NTEwMDIwNDQx | 8,048 | Fix label name in DataCollatorForNextSentencePrediction test | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | COLLABORATOR | null | # What does this PR do?
Labels have been renamed in `DataCollatorForNextSentencePrediction` to go along with the deprecation of the `masked_lm_labels` argument, but the corresponding test was not adjusted accordingly; this PR fixes that.
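For illustration only, the shape of the rename involved (a hypothetical snippet, not the actual test code; the key names are assumptions following the `masked_lm_labels` deprecation):
```python
# Hypothetical collated batch after the rename:
batch = {"input_ids": ..., "labels": ..., "next_sentence_label": ...}
assert "masked_lm_labels" not in batch  # the deprecated key is no longer produced
```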
Fixes #8034 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8048/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8048/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8048",
"html_url": "https://github.com/huggingface/transformers/pull/8048",
"diff_url": "https://github.com/huggingface/transformers/pull/8048.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8048.patch",
"merged_at": 1603718593000
} |
https://api.github.com/repos/huggingface/transformers/issues/8047 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8047/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8047/comments | https://api.github.com/repos/huggingface/transformers/issues/8047/events | https://github.com/huggingface/transformers/issues/8047 | 729,515,951 | MDU6SXNzdWU3Mjk1MTU5NTE= | 8,047 | [T5] Unused `n_positions` and `max_position_embeddings`. | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
},
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,603 | 1,605 | 1,605 | MEMBER | null | The T5Config has the parameter `n_positions` set to 512 and `max_position_embeddings` referring to `n_positions`. However, neither `max_position_embeddings` nor `n_positions` is used in the `T5Model`, and T5 is not limited to `max_position_embeddings`. *E.g.*:
```python
import torch
from transformers import T5Model
model = T5Model.from_pretrained("t5-small")
model.config.max_position_embeddings # shows 512
input_ids = torch.tensor([600 * [0]]) # input of size > 512
model(input_ids, decoder_input_ids=input_ids) # works fine
```
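As a quick, hypothetical sanity check that the modeling code never reads these values (file path as in the current repository layout):
```bash
grep -nE "n_positions|max_position_embeddings" src/transformers/modeling_t5.py || echo "no matches"
```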
I think we should delete the parameter.
@thomwolf - do you remember why we added `max_position_embeddings` and `n_positions` to T5? The model does not seem to use these params and also should not be limited to 512 due to its relative position embeddings. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8047/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8047/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8046 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8046/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8046/comments | https://api.github.com/repos/huggingface/transformers/issues/8046/events | https://github.com/huggingface/transformers/pull/8046 | 729,463,367 | MDExOlB1bGxSZXF1ZXN0NTA5OTQ3MTMy | 8,046 | Minor typo fixes to the preprocessing tutorial in the docs | {
"login": "albanie",
"id": 4395064,
"node_id": "MDQ6VXNlcjQzOTUwNjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/4395064?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albanie",
"html_url": "https://github.com/albanie",
"followers_url": "https://api.github.com/users/albanie/followers",
"following_url": "https://api.github.com/users/albanie/following{/other_user}",
"gists_url": "https://api.github.com/users/albanie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albanie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albanie/subscriptions",
"organizations_url": "https://api.github.com/users/albanie/orgs",
"repos_url": "https://api.github.com/users/albanie/repos",
"events_url": "https://api.github.com/users/albanie/events{/privacy}",
"received_events_url": "https://api.github.com/users/albanie/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | Minor typo fixes to the preprocessing tutorial in the docs
# What does this PR do?
Minor typo fixes to the preprocessing tutorial in the docs
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
documentation: @sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8046/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8046/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8046",
"html_url": "https://github.com/huggingface/transformers/pull/8046",
"diff_url": "https://github.com/huggingface/transformers/pull/8046.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8046.patch",
"merged_at": 1603722150000
} |
https://api.github.com/repos/huggingface/transformers/issues/8045 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8045/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8045/comments | https://api.github.com/repos/huggingface/transformers/issues/8045/events | https://github.com/huggingface/transformers/pull/8045 | 729,462,986 | MDExOlB1bGxSZXF1ZXN0NTA5OTQ2ODIy | 8,045 | Minor typo fixes to the tokenizer summary in the docs | {
"login": "albanie",
"id": 4395064,
"node_id": "MDQ6VXNlcjQzOTUwNjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/4395064?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albanie",
"html_url": "https://github.com/albanie",
"followers_url": "https://api.github.com/users/albanie/followers",
"following_url": "https://api.github.com/users/albanie/following{/other_user}",
"gists_url": "https://api.github.com/users/albanie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albanie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albanie/subscriptions",
"organizations_url": "https://api.github.com/users/albanie/orgs",
"repos_url": "https://api.github.com/users/albanie/repos",
"events_url": "https://api.github.com/users/albanie/events{/privacy}",
"received_events_url": "https://api.github.com/users/albanie/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | Minor typo fixes to the tokenizer summary in the docs
# What does this PR do?
Minor typo fixes to the tokenizer summary in the docs
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
documentation: @sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8045/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8045/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8045",
"html_url": "https://github.com/huggingface/transformers/pull/8045",
"diff_url": "https://github.com/huggingface/transformers/pull/8045.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8045.patch",
"merged_at": 1603714114000
} |
https://api.github.com/repos/huggingface/transformers/issues/8044 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8044/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8044/comments | https://api.github.com/repos/huggingface/transformers/issues/8044/events | https://github.com/huggingface/transformers/pull/8044 | 729,459,763 | MDExOlB1bGxSZXF1ZXN0NTA5OTQ0MTI0 | 8,044 | Automatically cuts input if the pipeline position_ids can't handle it. | {
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I just discovered that roberta does not respect max_embeddings_postion (it adds `padding_idx` to it) so the current fix actually does not really fix it for roberta. If t5 is another special case I fear the current approach is not adapted.\r\n\r\n",
"> I just discovered that roberta does not respect max_embeddings_postion (it adds `padding_idx` to it) so the current fix actually does not really fix it for roberta. If t5 is another special case I fear the current approach is not adapted.\r\n\r\nI think Roberta does respect `max_embeddings_positions` - why do you think it does not? Roberta is just ugly because `max_position_ids=514 != 512`, but your approach here should work for Roberta no? ",
"You can check\r\n\r\n```python\r\nfrom transformers import pipeline\r\n\r\npipe = pipeline(task='sentiment-analysis', model='roberta-base-openai-detector')\r\npipe(\"Some.....very long text\")\r\n```\r\nAnd it will fail. The reason is:\r\n\r\n```python\r\ndef create_position_ids_from_input_ids(input_ids, padding_idx): \r\n \"\"\"Replace non-padding symbols with their position numbers. Position numbers begin at\r\n padding_idx+1. Padding symbols are ignored. This is modified from fairseq's\r\n `utils.make_positions`. \r\n \r\n :param torch.Tensor x: \r\n :return torch.Tensor: \r\n \"\"\" \r\n # The series of casts and type-conversions here are carefully balanced to both work with ONNX export and XLA.\r\n mask = input_ids.ne(padding_idx).int() \r\n incremental_indices = torch.cumsum(mask, dim=1).type_as(mask) * mask \r\n return incremental_indices.long() + padding_idx \r\n```\r\nAs it mentions here, position_embeddings start at `padding_idx+1`. So 2, and finish at 514 (max_embeddings_position), but if we send a sequence of length `514` then the final position_embedding will be `516` (514 + 2) and so out of the embeddings available.\r\n\r\nI'll look at T5 to look for a better approach",
"> ```python\r\n> create_position_ids_from_input_ids\r\n> ```\r\n\r\nI see! \r\n\r\nBut this also actually looks like a bug in Roberta then. This function does not make much sense in combination with `max_position_embeddings=514`... I think Roberta's `max_position_embeddings` should actually be changed to `512`. @LysandreJik - can you take a look at this as well maybe? \r\n\r\nRegarding T5, as written in the issue I think we should delete `max_position_embeddings`.",
"I think `T5` is ok, as it seems to use `n_positions` not `max_embedding_positions`, no ?\r\n\r\nhttps://s3.amazonaws.com/models.huggingface.co/bert/t5-base-config.json\r\nhttps://s3.amazonaws.com/models.huggingface.co/bert/t5-large-config.json",
"> I think `T5` is ok, as it seems to use `n_positions` not `max_embedding_positions`, no ?\r\n> \r\n> https://s3.amazonaws.com/models.huggingface.co/bert/t5-base-config.json\r\n> https://s3.amazonaws.com/models.huggingface.co/bert/t5-large-config.json\r\n\r\nThe config defines `max_position_embeddings` as `n_positions` so it does have a `max_position_embeddings` param",
"Okay I changed the logic.\r\n\r\nInstead of using `truncation='longest_first'`, I am not using it, but I am following the `deprecation_warning` flag to actually be able to re-emit a warning, otherwise it is silent which is a bit damageable I'd say.\r\n\r\nThere is still an issue, where the small model used in the pipeline tests `\"sshleifer/tiny-distilbert-base-uncased-finetuned-sst-2-english\"` actually does not define properly a `tokenizer.model_max_length`, but I think it's more a configuration issue than a code issue.\r\n\r\nWhat do you think? ",
"It's a good idea to be able to re-emit the warnings, however, this dictionary exists so that these warnings are not re-emitted down the line. The point of this is that users only see this warning once and it doesn't flood their stdout.\r\n\r\nThis means that\r\n```py\r\n>>> from transformers import RobertaTokenizer\r\n>>> tokenizer = RobertaTokenizer.from_pretrained(\"roberta-base\")\r\n>>> tokenizer.encode(\"Hey how are you\" * 1000)\r\nToken indices sequence length is longer than the specified maximum sequence length for this model (4002 > 512). Running this sequence through the model will result in indexing errors\r\n[0, ...]\r\n>>> tokenizer.deprecation_warnings\r\n{'sequence-length-is-longer-than-the-specified-maximum': True}\r\n>>> tokenizer.encode(\"Hey how are you\" * 10)\r\n[0, ...]\r\n>>> tokenizer.deprecation_warnings\r\n{'sequence-length-is-longer-than-the-specified-maximum': True}\r\n```\r\nChecking against it means you will always re-emit the warning:\r\n```py\r\n>>> from transformers import pipeline\r\n... \r\n... pipe = pipeline(task='sentiment-analysis', model='roberta-base-openai-detector')\r\n... pipe(\"Some.....very long text\" * 1000)\r\n[...]\r\nToken indices sequence length is longer than the specified maximum sequence length for this model (5002 > 512). Running this sequence through the model will result in indexing errors\r\n/home/jik/Workspaces/Python/transformers/src/transformers/pipelines.py:697: PipelineWarning: You input length was too long (5002 > 512) for this model and was truncated.\r\n PipelineWarning,\r\n>>> pipe(\"Some.....very long text\")\r\n/home/jik/Workspaces/Python/transformers/src/transformers/pipelines.py:697: PipelineWarning: You input length was too long (7 > 512) for this model and was truncated.\r\n PipelineWarning,\r\n[{'label': 'LABEL_0', 'score': 0.8550598621368408}]\r\n```\r\nAlso while you're searching for better solutions, if you can use the `truncation` parameter that would be awesome. If you can't, no big deal, but I'd rather we use our own API if possible.",
"I am agree with the idea to not spam, the problem is that the current way of doing it is \r\n\r\n```python\r\n logger.warning( \r\n \"Token indices sequence length is longer than the specified maximum sequence length \"\r\n \"for this model ({} > {}). Running this sequence through the model will result in \"\r\n \"indexing errors\".format(len(encoded_inputs[\"input_ids\"]), self.model_max_length)\r\n ) \r\n```\r\n\r\nnot\r\n\r\n```python\r\nwarnings.warn(\".....\", UserWarning)\r\n```\r\n\r\n(which btw can be triggered once by setting a warning filter but it's not the purpose here).\r\n\r\nThe more I think about this issue, the more I think the current behavior might be the correct one for transformers.\r\n- **Silent truncation is worse than raising an exception (ignoring what could be most of the input, is super dangerous for users, as it might output very wrong results).**\r\n- Adding checks (at the model level) within the forward pass is too costly.\r\n- `Re-emitting` a warning is bad if it's not trivial enough to do (don't want too much logic to handle those). Here we would need to capture (logger) before re-emitting (could be another logger or warnings, but it's still a bit much of work) \r\n- At the pipeline level, we don't want to dive too deeply into model internals (like model_max_length and other configs), I feel at most config that is shared by *all* or *almost all* models (like vocab_size)\r\n\r\nI'm in favor of closing this PR in the end.",
"Yes, the reason we use `logging` instead of `warnings` is to make use of the centralized logging system.\r\nI agree with all your points, and I believe this is something that could be patched by allowing the users to pass kwargs to the tokenizer/model, as is something that should be enabled in pipelines v2. \r\n\r\nClosing then, thank you for experimenting!"
] | 1,603 | 1,651 | 1,603 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Currently the failure is `index out of range in self` (it fails in PyTorch's position embeddings call).
This PR simply truncates the pipeline's input, with a warning to the user, if the model can't handle its length.
That feels better than the cryptic error message. Raising an error seems correct for the underlying model (it genuinely
can't handle the input), but it seems a bit off at the pipeline level: the user does not input tokens directly, so there
is no obvious way for them to trim the input themselves. Automatically truncating with a warning seems a bit better from
a usage standpoint.
This PR also improves a docstring that had a typo.
It also adds a new `PipelineWarning`, because there are now two instances of such warnings and there should be more in the
future, so enabling users to catch these warnings seems like a good idea.
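For illustration, a minimal sketch of the behavior described above; the helper name and the exact warning message are assumptions, not this PR's literal code:
```python
import warnings

class PipelineWarning(UserWarning):
    """Warning category for pipelines, so users can filter or catch them."""

def maybe_truncate(input_ids, model_max_length):
    # Truncate over-long inputs and warn, instead of letting the model
    # crash later with "index out of range in self".
    if len(input_ids) > model_max_length:
        warnings.warn(
            f"Input length {len(input_ids)} exceeds the model maximum "
            f"({model_max_length}); the input was truncated.",
            PipelineWarning,
        )
        input_ids = input_ids[:model_max_length]
    return input_ids

# Users can then opt in to stricter handling, e.g.:
# warnings.simplefilter("error", PipelineWarning)
```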
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
@patrickvonplaten
@lhoestq
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
--> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8044/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8044/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8044",
"html_url": "https://github.com/huggingface/transformers/pull/8044",
"diff_url": "https://github.com/huggingface/transformers/pull/8044.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8044.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/8043 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8043/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8043/comments | https://api.github.com/repos/huggingface/transformers/issues/8043/events | https://github.com/huggingface/transformers/pull/8043 | 729,455,607 | MDExOlB1bGxSZXF1ZXN0NTA5OTQwNzM5 | 8,043 | [Seq2Seq Trainer] Make sure padding is implemented for models without pad_token | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"LGTM!",
"@patrickvonplaten , @sshleifer \r\nI am seeing a major slowdown on TPU V3-8, \r\nlast time (9e68d075a4100906509170498480823e7e61874a) `sshleifer/student_marian_en_ro_6_3` finished 1 epoch in ~6 mins,\r\nnow on this branch it's showing ~1hr 20 mins",
"> @patrickvonplaten , @sshleifer\r\n> I am seeing a major slowdown on TPU V3-8,\r\n> last time ([9e68d07](https://github.com/huggingface/transformers/commit/9e68d075a4100906509170498480823e7e61874a)) `sshleifer/student_marian_en_ro_6_3` finished 1 epoch in ~6 mins,\r\n> now on this branch it's showing ~1hr 20 mins\r\n\r\nOhoh :-/ can you narrow down the commit that caused the slow-down? I took a look again at https://github.com/huggingface/transformers/pull/7809/files and this line I added could be problematic `inputs = copy.deepcopy(inputs)`. ",
"> > @patrickvonplaten , @sshleifer\r\n> > I am seeing a major slowdown on TPU V3-8,\r\n> > last time ([9e68d07](https://github.com/huggingface/transformers/commit/9e68d075a4100906509170498480823e7e61874a)) `sshleifer/student_marian_en_ro_6_3` finished 1 epoch in ~6 mins,\r\n> > now on this branch it's showing ~1hr 20 mins\r\n> \r\n> Ohh, can you narrow down the commit that caused the slow-down? I took a look again at https://github.com/huggingface/transformers/pull/7809/files and this line I added could be problematic `inputs = copy.deepcopy(inputs)`.\r\n\r\nYeah this line is actually called at every step -> can you check whether removing the `copy` operation speeds the seq2seq trainer up again? I've been a bit sloppy there I think :-/ ",
"It's still very slow even after removing that line. I'll try to find the exact commit which is causing this slowdown."
] | 1,603 | 1,603 | 1,603 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR adds padding for models without a padding token as well. The logic is the following:
1) If the model predicts `targets` < `max_length`, it has to have at least an `eos_token_id`. If the model has no `config.pad_token_id` defined, then it simply uses the `config.eos_token_id` for padding.
2) If the model has no `config.eos_token_id`, it cannot generate predictions shorter than `max_length`. In this case, padding will never happen. A minimal sketch of this rule is shown below.
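The sketch below illustrates the two cases; the helper name and signature are assumptions, not the PR's literal code:
```python
import torch

def pad_to_max_length(tensor: torch.Tensor, max_length: int, config) -> torch.Tensor:
    # Case 1: prefer pad_token_id, fall back to eos_token_id when no pad token exists.
    pad_id = config.pad_token_id if config.pad_token_id is not None else config.eos_token_id
    if pad_id is None:
        # Case 2: no eos_token_id either, so generation never stops early and
        # tensors already have length max_length -- nothing to pad.
        return tensor
    padded = tensor.new_full((tensor.shape[0], max_length), pad_id)
    padded[:, : tensor.shape[-1]] = tensor
    return padded
```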
@sshleifer @patil-suraj - you guys were right -> the `Trainer` requires padding in any case (even if the model has no padding token).
Could you guys review this PR and see if these fixes in Seq2Seq Trainer are ok for you?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8043/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8043/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8043",
"html_url": "https://github.com/huggingface/transformers/pull/8043",
"diff_url": "https://github.com/huggingface/transformers/pull/8043.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8043.patch",
"merged_at": 1603729696000
} |
https://api.github.com/repos/huggingface/transformers/issues/8042 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8042/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8042/comments | https://api.github.com/repos/huggingface/transformers/issues/8042/events | https://github.com/huggingface/transformers/pull/8042 | 729,433,461 | MDExOlB1bGxSZXF1ZXN0NTA5OTIyNDcy | 8,042 | Tentative improvement on sequence_length error for position_ids | {
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I'm going to close this PR. I started it but ultimately feel it's not worth it.\r\nThe overhead is non zero for a simple error message, that won't bring that much value (most users can debug anyway, users that can't won't necessarily do a better job with the improved error message)"
] | 1,603 | 1,651 | 1,603 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Currently the failure is `index out of range in self` (it fails in PyTorch's position embeddings call).
This PR simply improves the error message by stating that the `position_ids` are too large.
It does add an `if` statement and a `max` call to a forward pass, which is not great,
but it might help other users better understand the failure mode.
If this PR is desirable (which I am not sure it is, because it has non-zero overhead), we should probably apply the same
check to all `position_ids` at some point.
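For reference, a sketch of the kind of guard being added; the attribute names are illustrative and the actual diff may differ:
```python
# Inside the embedding module's forward pass:
max_pos = int(position_ids.max())
if max_pos >= self.position_embeddings.num_embeddings:
    raise ValueError(
        f"position_ids contains index {max_pos}, but this model only has "
        f"{self.position_embeddings.num_embeddings} position embeddings; "
        "the input sequence is probably too long."
    )
```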
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
@patrickvonplaten
@lhoestq
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
--> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8042/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8042/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8042",
"html_url": "https://github.com/huggingface/transformers/pull/8042",
"diff_url": "https://github.com/huggingface/transformers/pull/8042.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8042.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/8041 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8041/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8041/comments | https://api.github.com/repos/huggingface/transformers/issues/8041/events | https://github.com/huggingface/transformers/pull/8041 | 729,402,917 | MDExOlB1bGxSZXF1ZXN0NTA5ODk3ODA4 | 8,041 | Create model cards for guwenbert | {
"login": "Ethan-yt",
"id": 9592150,
"node_id": "MDQ6VXNlcjk1OTIxNTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/9592150?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ethan-yt",
"html_url": "https://github.com/Ethan-yt",
"followers_url": "https://api.github.com/users/Ethan-yt/followers",
"following_url": "https://api.github.com/users/Ethan-yt/following{/other_user}",
"gists_url": "https://api.github.com/users/Ethan-yt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ethan-yt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ethan-yt/subscriptions",
"organizations_url": "https://api.github.com/users/Ethan-yt/orgs",
"repos_url": "https://api.github.com/users/Ethan-yt/repos",
"events_url": "https://api.github.com/users/Ethan-yt/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ethan-yt/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"Cool, thanks for sharing!"
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | Add two model_cards: ethanyt/guwenbert-base and ethanyt/guwenbert-large | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8041/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8041/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8041",
"html_url": "https://github.com/huggingface/transformers/pull/8041",
"diff_url": "https://github.com/huggingface/transformers/pull/8041.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8041.patch",
"merged_at": 1603974114000
} |
https://api.github.com/repos/huggingface/transformers/issues/8040 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8040/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8040/comments | https://api.github.com/repos/huggingface/transformers/issues/8040/events | https://github.com/huggingface/transformers/issues/8040 | 729,381,189 | MDU6SXNzdWU3MjkzODExODk= | 8,040 | Converting Transformers model to Tensorflow model | {
"login": "redrussianarmy",
"id": 24498747,
"node_id": "MDQ6VXNlcjI0NDk4NzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/24498747?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/redrussianarmy",
"html_url": "https://github.com/redrussianarmy",
"followers_url": "https://api.github.com/users/redrussianarmy/followers",
"following_url": "https://api.github.com/users/redrussianarmy/following{/other_user}",
"gists_url": "https://api.github.com/users/redrussianarmy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/redrussianarmy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/redrussianarmy/subscriptions",
"organizations_url": "https://api.github.com/users/redrussianarmy/orgs",
"repos_url": "https://api.github.com/users/redrussianarmy/repos",
"events_url": "https://api.github.com/users/redrussianarmy/events{/privacy}",
"received_events_url": "https://api.github.com/users/redrussianarmy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I don't believe we have a way of converting keras models from .h5 to .pb. What is your use case, so that we can see if we may help further?",
"I want to improve the inference performance. I fine-tuned the pretrained model (`dbmdz/bert-base-turkish-uncased`). TensorRT can be helpful to reduce inference time. Use case is converting Keras model to TensorRT model. How can I achieve this?\r\n\r\nP.S.: I am open to any advice. Tensorflow Lite or other things can be used. I just need some guidance.",
"@mfuntowicz I believe you have experience with using TensorRT with the `transformers` models. Do you have any idea of how to enable this?",
"You can try a couple of ways : \r\n1. Keras -> Onnx -> TensorRT : You can use keras2onnx . You might have some operations that are not directly supported in onnx so you'll have to remove/edit those operations to convertible terms. Converting onnx to TRT can be done with a tool called \"trtexec\" or there are readymade scripts to do that. Do check model graph and accuracy at every step.\r\n2. Keras -> Frozen graph(.pb) -> onnx -> TensorRT\r\n\r\nYou can also use TF-TRT which again optimizes but less optimized than TensorRT. \r\n\r\nThere is another way you can explore Keras -> UFF -> TensorRT . ",
"Thank you for leading @zerocool95 ",
" i also face this problem, and i find a exits onnx file on onnx github, and it said it was transfered from huggingface, but when i use trtexec to covert onnx ---> trt . it say it don;'t support some ops like 'NoZero'"
] | 1,603 | 1,631 | 1,604 | NONE | null | I am trying to convert `dbmdz/bert-base-turkish-uncased` to a TensorFlow model (`.pb`). It contains a `tf_model.h5` file.
I tried to convert from `tf_model.h5` to a TensorFlow model, but I couldn't manage it. The `tf_model.h5` file is a Keras model, isn't it?
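For reference, a sketch of the TF2 freezing route I am considering (untested; everything beyond the model name is an assumption):

```python
# Untested sketch, assuming the standard TF2 concrete-function freezing recipe.
import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import (
    convert_variables_to_constants_v2,
)
from transformers import TFBertModel

model = TFBertModel.from_pretrained("dbmdz/bert-base-turkish-uncased")

# Trace the model with a fixed input signature (sequence length is assumed).
@tf.function(input_signature=[tf.TensorSpec([None, 128], tf.int32, name="input_ids")])
def serving(input_ids):
    return model(input_ids).last_hidden_state

frozen = convert_variables_to_constants_v2(serving.get_concrete_function())
tf.io.write_graph(frozen.graph.as_graph_def(), ".", "tf_model.pb", as_text=False)
```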
Are there any instructions for converting Hugging Face models (non-TF checkpoints) to a TensorFlow model (`.pb`)? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8040/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8040/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8039 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8039/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8039/comments | https://api.github.com/repos/huggingface/transformers/issues/8039/events | https://github.com/huggingface/transformers/issues/8039 | 729,286,213 | MDU6SXNzdWU3MjkyODYyMTM= | 8,039 | Why do the functions "add_special_tokens()" and "resize_token_embeddings()" hurt the performance of 'gpt2' and 'gpt2-medium' but not 'gpt2-large' and 'gpt2-xl'? | {
"login": "MingfengXue",
"id": 61129327,
"node_id": "MDQ6VXNlcjYxMTI5MzI3",
"avatar_url": "https://avatars.githubusercontent.com/u/61129327?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MingfengXue",
"html_url": "https://github.com/MingfengXue",
"followers_url": "https://api.github.com/users/MingfengXue/followers",
"following_url": "https://api.github.com/users/MingfengXue/following{/other_user}",
"gists_url": "https://api.github.com/users/MingfengXue/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MingfengXue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MingfengXue/subscriptions",
"organizations_url": "https://api.github.com/users/MingfengXue/orgs",
"repos_url": "https://api.github.com/users/MingfengXue/repos",
"events_url": "https://api.github.com/users/MingfengXue/events{/privacy}",
"received_events_url": "https://api.github.com/users/MingfengXue/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This behavior is caused by the fact that GPT2 is a model that ties the weights of the token embeddings and the language model head. For some settings of the pretrained weights, a new randomly initialized row of the language model head weight matrix might effectively unilaterally assign very high likelihood to the new word. This is what's happening for some pretrained models (but not others). A workaround is, after resizing the embeddings, set the new token embeddings to be the original unk token embedding, for example:\r\n\r\n model.transformer.wte.weight[-1] = model.transformer.wte.weight[-2]\r\n\r\nAlternatively, if you want to break the symmetry created by initializing with the exact same embedding, you can set the new embedding to be the average of all other embeddings, or add some noise when copying the existing unk token embedding. But just naively copying like above fixes the problem for GPT-2 (small).",
"@eric-mitchell You are right. Now I know how to solve my problem. Thanks for that.",
"Exactly same code with @eric-mitchell 's solution gives me error like:\r\n`model.transformer.wte.weight[-1] = model.transformer.wte.weight[-2]`\r\n\r\n> \"RuntimeError: a view of a leaf Variable that requires grad is being used in an in-place operation.\" \r\n\r\nAs far as i know, in-place operation is the operation that changes directly the content. But here I can't find any in-place operation. Does anyone know why this happens?",
"@jucho2725 I've just found a possible solution [here](https://stackoverflow.com/questions/49161652/how-to-get-around-in-place-operation-error-if-index-leaf-variable-for-gradient-u) ",
"@Loreb92 Thanks for the help. How I solve the issue is the same as there. \r\nHere is a code that I use. Hope this helps someone.\r\n\r\n```\r\ntokenizer = GPT2Tokenizer.from_pretrained(model_args.model_name_or_path)\r\ntokenizer.pad_token = tokenizer.eos_token # gpt2 does not have pad token at first.\r\n\r\nspecial_tokens_dict = {\r\n \"additional_special_tokens\": ['[ABC]', '[DEF]', '[GHI]'],\r\n}\r\nnum_added_toks = tokenizer.add_special_tokens(special_tokens_dict)\r\nmodel.resize_token_embeddings(len(tokenizer))\r\nunk_tok_emb = model.transformer.wte.weight.data[tokenizer.unk_token_id, :] \r\nfor i in range(num_added_toks):\r\n model.transformer.wte.weight.data[-(i+1), :] = unk_tok_emb\r\n```\r\n"
] | 1,603 | 1,623 | 1,609 | NONE | null | # ❓ Questions & Help
## Details
When I use add_special_tokens and resize_token_embeddings to expand the vocabulary, the LM loss becomes very large for the gpt2 and gpt2-medium models (loaded with from_pretrained('gpt2') and from_pretrained('gpt2-medium')). But it doesn't happen when I load the gpt2-large and gpt2-xl models (also loaded with from_pretrained). Why?
Environment Info:
python 3.7.7
Linux 16.04
transformers 3.3.1
pytorch 1.6.0
Code and results:
```python
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

device = torch.device('cuda:3')
input_sentence = 'who win this game?'

gpt2tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
gpt2model = GPT2LMHeadModel.from_pretrained('gpt2', return_dict=True)
gpt2model.to(device)
input = gpt2tokenizer(input_sentence, return_tensors='pt').to(device)
outputs = gpt2model(**input, labels=input['input_ids'])
outputs.loss
# tensor(5.0102, device='cuda:3', grad_fn=<NllLossBackward>)

gpt2tokenizer.add_special_tokens({'additional_special_tokens': ['[first]', '[second]']})
gpt2model.resize_token_embeddings(len(gpt2tokenizer))
input = gpt2tokenizer(input_sentence, return_tensors='pt').to(device)
outputs = gpt2model(**input, labels=input['input_ids'])
outputs.loss
# tensor(77.1513, device='cuda:3', grad_fn=<NllLossBackward>)  # loss explodes

gpt2tokenizer = GPT2Tokenizer.from_pretrained('gpt2-large')
gpt2model = GPT2LMHeadModel.from_pretrained('gpt2-large', return_dict=True)
gpt2model.to(device)
input = gpt2tokenizer(input_sentence, return_tensors='pt').to(device)
outputs = gpt2model(**input, labels=input['input_ids'])
outputs.loss
# tensor(5.1567, device='cuda:3', grad_fn=<NllLossBackward>)

gpt2tokenizer.add_special_tokens({'additional_special_tokens': ['[first]', '[second]']})
gpt2model.resize_token_embeddings(len(gpt2tokenizer))
input = gpt2tokenizer(input_sentence, return_tensors='pt').to(device)
outputs = gpt2model(**input, labels=input['input_ids'])
outputs.loss
# tensor(5.1568, device='cuda:3', grad_fn=<NllLossBackward>)  # loss unchanged
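
# Workaround sketch from the discussion above (my wording, untested here):
# after resizing, copy the unk-token embedding into the newly added rows so
# the tied LM head does not assign arbitrary logits to the new tokens.
unk_emb = gpt2model.transformer.wte.weight.data[gpt2tokenizer.unk_token_id, :]
for i in range(2):  # two special tokens were added above
    gpt2model.transformer.wte.weight.data[-(i + 1), :] = unk_emb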
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8039/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8039/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8038 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8038/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8038/comments | https://api.github.com/repos/huggingface/transformers/issues/8038/events | https://github.com/huggingface/transformers/pull/8038 | 729,157,534 | MDExOlB1bGxSZXF1ZXN0NTA5Njk1MzUw | 8,038 | Model Card for Gujarati-XLM-R-Base | {
"login": "ashwanitanwar",
"id": 26187617,
"node_id": "MDQ6VXNlcjI2MTg3NjE3",
"avatar_url": "https://avatars.githubusercontent.com/u/26187617?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ashwanitanwar",
"html_url": "https://github.com/ashwanitanwar",
"followers_url": "https://api.github.com/users/ashwanitanwar/followers",
"following_url": "https://api.github.com/users/ashwanitanwar/following{/other_user}",
"gists_url": "https://api.github.com/users/ashwanitanwar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ashwanitanwar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ashwanitanwar/subscriptions",
"organizations_url": "https://api.github.com/users/ashwanitanwar/orgs",
"repos_url": "https://api.github.com/users/ashwanitanwar/repos",
"events_url": "https://api.github.com/users/ashwanitanwar/events{/privacy}",
"received_events_url": "https://api.github.com/users/ashwanitanwar/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | This PR adds the model card for the Gujarati-XLM-R-Base. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8038/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8038/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8038",
"html_url": "https://github.com/huggingface/transformers/pull/8038",
"diff_url": "https://github.com/huggingface/transformers/pull/8038.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8038.patch",
"merged_at": 1603974072000
} |
https://api.github.com/repos/huggingface/transformers/issues/8037 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8037/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8037/comments | https://api.github.com/repos/huggingface/transformers/issues/8037/events | https://github.com/huggingface/transformers/issues/8037 | 729,136,898 | MDU6SXNzdWU3MjkxMzY4OTg= | 8,037 | RAG: Do we need to pretrained the doc-encoder when using a custom dataset? | {
"login": "shamanez",
"id": 16892570,
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shamanez",
"html_url": "https://github.com/shamanez",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"repos_url": "https://api.github.com/users/shamanez/repos",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
},
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hey @shamanez,\r\n\r\nFrom the paper: https://arxiv.org/pdf/2005.11401.pdf (check part 2.2 - 2.5) it seems like the doc-encoder was never explicitely pre-trained for RAG, but the authors used a pre-trained retriever that \"was trained to retrieve documents which\r\ncontain answers to TriviaQA [20] questions and Natural Questions [25].\" => So you would have to see for yourself if the document encoder is sutiable for your task or not. If it is not suitable you will have to pre-train your own doc encoder and built the index using this document encoder. \r\n\r\nIt's a good question though we should probably also put it on the discussion forum: http://discuss.huggingface.co/.\r\n\r\nAlso pinging @lhoestq as he can probably provide you with a better answer than I can.\r\n",
"+1 for Patrick, and I confirm that RAG uses the pretrained doc encoder from DPR.\r\nIt would be very interesting to see if the doc encoder handles well documents from specific domains.\r\nLet us know if you plan to test that :)\r\n\r\nIf it doesn't work for your case you will probably need to train DPR on your dataset for retrieval before using RAG",
"@patrickvonplaten @lhoestq \r\n\r\nOk, I will give it a try and do a comparison for my task. Just need to clarify my pre-training pipeline for DPR in case I need to pre-train the doc encoder.\r\n\r\n1. Pre-train the DPR using the FacebookAI [repository](https://github.com/facebookresearch/DPR).\r\n2. Use the custom checkpoint and load it [here](https://github.com/huggingface/transformers/blob/master/examples/rag/finetune.py#L105) (Is there any conversion I need to do before this step)",
"@lhoestq \r\n\r\nSorry for spamming. Can you please let me know, whether I can directly use different DPR checkpoints that trained with Facebook repo? ",
"> @patrickvonplaten @lhoestq\r\n> \r\n> Ok, I will give it a try and do a comparison for my task. Just need to clarify my pre-training pipeline for DPR in case I need to pre-train the doc encoder.\r\n> \r\n> 1. Pre-train the DPR using the FacebookAI [repository](https://github.com/facebookresearch/DPR).\r\n> 2. Use the custom checkpoint and load it [here](https://github.com/huggingface/transformers/blob/master/examples/rag/finetune.py#L105) (Is there any conversion I need to do before this step)\r\n\r\nYes that's it. \r\nTo convert the DPR checkpoint from the original repo to transformers you can use the script `src/transformers/convert_dpr_original_checkpoint_to_pytorch.py`\r\n\r\n",
"Perfect. Thanks for the clarification :).\n\nOn Thu, Oct 29, 2020, 22:32 Quentin Lhoest <[email protected]> wrote:\n\n> @patrickvonplaten <https://github.com/patrickvonplaten> @lhoestq\n> <https://github.com/lhoestq>\n>\n> Ok, I will give it a try and do a comparison for my task. Just need to\n> clarify my pre-training pipeline for DPR in case I need to pre-train the\n> doc encoder.\n>\n> 1. Pre-train the DPR using the FacebookAI repository\n> <https://github.com/facebookresearch/DPR>.\n> 2. Use the custom checkpoint and load it here\n> <https://github.com/huggingface/transformers/blob/master/examples/rag/finetune.py#L105>\n> (Is there any conversion I need to do before this step)\n>\n> Yes that's it.\n> To convert the DPR checkpoint from the original repo to transformers you\n> can use the script\n> src/transformers/convert_dpr_original_checkpoint_to_pytorch.py\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/8037#issuecomment-718546578>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AEA4FGQXQYM5S2CYMOQ3RWDSNEZCPANCNFSM4S6VBIUA>\n> .\n>\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,603 | 1,610 | 1,610 | CONTRIBUTOR | null | RAG now includes a [script](https://github.com/huggingface/transformers/blob/master/examples/rag/use_own_knowledge_dataset.py) that lets us use a custom dataset other than the wiki dataset.
Since we do not update the doc encoder in the fine-tuning phase of RAG (we update only BART and the question encoder), what happens if our custom dataset comes from a different distribution than the wiki dataset (e.g., medical records)?
Will it still work? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8037/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8037/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8036 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8036/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8036/comments | https://api.github.com/repos/huggingface/transformers/issues/8036/events | https://github.com/huggingface/transformers/pull/8036 | 729,087,836 | MDExOlB1bGxSZXF1ZXN0NTA5NjQyNTM0 | 8,036 | Add mixed precision evaluation | {
"login": "luyug",
"id": 55288513,
"node_id": "MDQ6VXNlcjU1Mjg4NTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/55288513?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/luyug",
"html_url": "https://github.com/luyug",
"followers_url": "https://api.github.com/users/luyug/followers",
"following_url": "https://api.github.com/users/luyug/following{/other_user}",
"gists_url": "https://api.github.com/users/luyug/gists{/gist_id}",
"starred_url": "https://api.github.com/users/luyug/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/luyug/subscriptions",
"organizations_url": "https://api.github.com/users/luyug/orgs",
"repos_url": "https://api.github.com/users/luyug/repos",
"events_url": "https://api.github.com/users/luyug/events{/privacy}",
"received_events_url": "https://api.github.com/users/luyug/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I see your point; Apex will take care of the other case. Updated!",
"Thanks!"
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | # What does this PR do?
Add a flag and code to run the forward pass in mixed precision in the trainer's `prediction_step` function.
This lets evaluation (and prediction) run faster.
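A minimal sketch of the idea (the `use_amp` flag and the surrounding structure are assumptions, not the merged code):

```python
# Hedged sketch; `use_amp` is a hypothetical flag name.
import torch

def prediction_forward(model, inputs, use_amp: bool):
    with torch.no_grad():
        if use_amp:
            # autocast runs eligible CUDA ops in fp16 during the forward pass
            with torch.cuda.amp.autocast():
                return model(**inputs)
        return model(**inputs)
```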
## Who can review?
@sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8036/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8036/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8036",
"html_url": "https://github.com/huggingface/transformers/pull/8036",
"diff_url": "https://github.com/huggingface/transformers/pull/8036.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8036.patch",
"merged_at": 1603714352000
} |
https://api.github.com/repos/huggingface/transformers/issues/8035 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8035/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8035/comments | https://api.github.com/repos/huggingface/transformers/issues/8035/events | https://github.com/huggingface/transformers/issues/8035 | 729,070,622 | MDU6SXNzdWU3MjkwNzA2MjI= | 8,035 | ModelUtilsTest.test_model_from_pretrained failiing on CUDA | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"Assign me!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,603 | 1,609 | 1,609 | CONTRIBUTOR | null | Seems as though an `architectures` key is being added.
Not sure who to assign this to @LysandreJik.
```python
__________________ ModelUtilsTest.test_model_from_pretrained ___________________
[gw0] linux -- Python 3.7.6 /home/hf/actions-runner_transformers/_work/transformers/transformers/.env/bin/python
self = <tests.test_modeling_common.ModelUtilsTest testMethod=test_model_from_pretrained>
@slow
def test_model_from_pretrained(self):
for model_name in BERT_PRETRAINED_MODEL_ARCHIVE_LIST[:1]:
config = BertConfig.from_pretrained(model_name)
self.assertIsNotNone(config)
self.assertIsInstance(config, PretrainedConfig)
model = BertModel.from_pretrained(model_name)
model, loading_info = BertModel.from_pretrained(model_name, output_loading_info=True)
self.assertIsNotNone(model)
self.assertIsInstance(model, PreTrainedModel)
for value in loading_info.values():
self.assertEqual(len(value), 0)
config = BertConfig.from_pretrained(model_name, output_attentions=True, output_hidden_states=True)
model = BertModel.from_pretrained(model_name, output_attentions=True, output_hidden_states=True)
self.assertEqual(model.config.output_hidden_states, True)
> self.assertEqual(model.config, config)
E AssertionError: BertConfig {
E "_name_or_path": "bert-base-uncased",
E "a[518 chars]22
E }
E != BertConfig {
E "architectures": [
E "BertForMaskedLM"
E [478 chars]22
E }
tests/test_modeling_common.py:1092: AssertionError
```
https://github.com/huggingface/transformers/runs/1303479917?check_suite_focus=true | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8035/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8035/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8034 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8034/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8034/comments | https://api.github.com/repos/huggingface/transformers/issues/8034/events | https://github.com/huggingface/transformers/issues/8034 | 729,070,356 | MDU6SXNzdWU3MjkwNzAzNTY= | 8,034 | DataCollatorIntegrationTest::test_nsp failing on GPU | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
] | [
"Think that should just be renamed `\"labels\"`. Will look tomorrow and fix."
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | @sgugger I believe you are the right person to tag?
```python
=================================== FAILURES ===================================
_____________________ DataCollatorIntegrationTest.test_nsp _____________________
[gw0] linux -- Python 3.7.6 /home/hf/actions-runner_transformers/_work/transformers/transformers/.env/bin/python
self = <tests.test_data_collator.DataCollatorIntegrationTest testMethod=test_nsp>
@slow
def test_nsp(self):
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
data_collator = DataCollatorForNextSentencePrediction(tokenizer)
dataset = TextDatasetForNextSentencePrediction(tokenizer, file_path=PATH_SAMPLE_TEXT, block_size=512)
examples = [dataset[i] for i in range(len(dataset))]
batch = data_collator(examples)
self.assertIsInstance(batch, dict)
# Since there are randomly generated false samples, the total number of samples is not fixed.
total_samples = batch["input_ids"].shape[0]
self.assertEqual(batch["input_ids"].shape, torch.Size((total_samples, 512)))
self.assertEqual(batch["token_type_ids"].shape, torch.Size((total_samples, 512)))
> self.assertEqual(batch["masked_lm_labels"].shape, torch.Size((total_samples, 512)))
E KeyError: 'masked_lm_labels'
```
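Based on the comment above, the key was presumably renamed to `labels`; if so, the failing assertion would become (my assumption, not a confirmed fix):

```python
# Assumed fix per the comment above: the collator now returns "labels".
self.assertEqual(batch["labels"].shape, torch.Size((total_samples, 512)))
```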
https://github.com/huggingface/transformers/runs/1303479917?check_suite_focus=true | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8034/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8034/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8033 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8033/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8033/comments | https://api.github.com/repos/huggingface/transformers/issues/8033/events | https://github.com/huggingface/transformers/pull/8033 | 729,065,097 | MDExOlB1bGxSZXF1ZXN0NTA5NjI1NjAz | 8,033 | [cleanup] pegasus,marian,mbart pytorch tests | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834088753,
"node_id": "MDU6TGFiZWwxODM0MDg4NzUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Tests",
"name": "Tests",
"color": "a6fcca",
"default": false,
"description": "Related to tests"
},
{
"id": 2139563322,
"node_id": "MDU6TGFiZWwyMTM5NTYzMzIy",
"url": "https://api.github.com/repos/huggingface/transformers/labels/cleanup",
"name": "cleanup",
"color": "e7fc49",
"default": false,
"description": ""
}
] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | + Cleans up 3 pytorch test files
+ Faster (num_beams=2) pegasus integration test.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8033/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8033/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8033",
"html_url": "https://github.com/huggingface/transformers/pull/8033",
"diff_url": "https://github.com/huggingface/transformers/pull/8033.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8033.patch",
"merged_at": 1603717146000
} |
https://api.github.com/repos/huggingface/transformers/issues/8032 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8032/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8032/comments | https://api.github.com/repos/huggingface/transformers/issues/8032/events | https://github.com/huggingface/transformers/issues/8032 | 729,056,877 | MDU6SXNzdWU3MjkwNTY4Nzc= | 8,032 | Commit 121dd43 changes DialoGPT generation behavior | {
"login": "abisee",
"id": 14880223,
"node_id": "MDQ6VXNlcjE0ODgwMjIz",
"avatar_url": "https://avatars.githubusercontent.com/u/14880223?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abisee",
"html_url": "https://github.com/abisee",
"followers_url": "https://api.github.com/users/abisee/followers",
"following_url": "https://api.github.com/users/abisee/following{/other_user}",
"gists_url": "https://api.github.com/users/abisee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abisee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abisee/subscriptions",
"organizations_url": "https://api.github.com/users/abisee/orgs",
"repos_url": "https://api.github.com/users/abisee/repos",
"events_url": "https://api.github.com/users/abisee/events{/privacy}",
"received_events_url": "https://api.github.com/users/abisee/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @abisee , sorry for the inconvenience.\r\n\r\nEven though you did not pass in attention mask, it is created here: (first 2 lines)\r\nhttps://github.com/huggingface/transformers/blob/5148f433097915f30864bf0ca6090656fecefbb8/src/transformers/generation_utils.py#L352-L363\r\n\r\nchanging this\r\n`chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)`\r\nto \r\n`chat_history_ids = model.generate(bot_input_ids, max_length=1000, )`\r\nseems to solve the problem. (the pad_token_id will still be set to tokenizer.eos_token_id, but after attention_mask is set to all ones)\r\n\r\nHere is how the bug can happen: \r\nIf someone tries to \r\n* use eos_token_id in sentences\r\n* and also sets pad_token_id=eos_token_id\r\n* and attention mask is created like this (using positions of pad_token_id). (there is no problem when using tokenizer to create attention mask)\r\n\r\nDon't have a better solution for now, will think about it. \r\n@patrickvonplaten @LysandreJik What do you think?\r\nMaybe generate() should not create attention mask for users, but this can break other code, too.",
"Thanks for the response @cccntu!\r\n\r\nMy understanding is that both GPT2 and DialoGPT were trained without a pad token; i.e. neither model has a pad token embedding. In that case, why does the DialoGPT example code contain `pad_token_id=tokenizer.eos_token_id`? What's the purpose of doing this, if a pad token isn't needed for generation, and the EOS token was never used as a pad token during training?",
"**For generation**, it seems that attention masks are created automatically (if there's an assigned pad token that appears in the input). See `GenerationMixin.generate()`:\r\n```\r\n# create attention mask if necessary\r\n # TODO (PVP): this should later be handled by the forward fn() in each model in the future see PR 3140\r\n if (attention_mask is None) and (pad_token_id is not None) and (pad_token_id in input_ids):\r\n attention_mask = input_ids.ne(pad_token_id).long()\r\n elif attention_mask is None:\r\n attention_mask = input_ids.new_ones(input_ids.shape)\r\n```\r\nHowever **for training** (at least for GPT2 models), as far as I can tell, the attention mask is _not_ created automatically, even if there's an assigned pad token that appears in the input. \r\n\r\nThis seems like an unexpected discrepancy, and another reason to put the attention mask creation in the model's `forward` as proposed by @thomwolf in [PR 3140](https://github.com/huggingface/transformers/pull/3140).",
"That's a super interesting issue! Thanks for posting it here! \r\n\r\nSo in short, in order to be able to do batch_generation with GPT2 (or Beam Search), we have to use some kind of token as the `pad_token_id` in case one batch finishes early. We decided a while back that for GPT2 we will just use the `eos_token_id` as the `pad_token_id` in this case. \r\n\r\nJust as you guys noticed the problem lies in `generate()` automatically creating the `attention_mask` and falsely assuming the `eos_token_id` is a `pad_token_id` . \r\n\r\nIMO, it was a mistake to automatically create the `attention_mask` in `generate()` as it could lead to unexpected problems such as those! \r\n\r\nI'm currently doing a big `generate()` refactor and in this refactor the problem should be solved (see comment in PR linked below).\r\n\r\nI hope that I'll be able to merge the PR in ~1 week.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,603 | 1,610 | 1,610 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 3.3.1
- Platform: Linux-4.4.0-127-generic-x86_64-with-debian-stretch-sid
- Python version: 3.7.3
- PyTorch version (GPU?): 1.6.0+cu101 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes (1 TITAN-XP)
- Using distributed or parallel set-up in script?: no
### Who can help
@cccntu @patrickvonplaten @LysandreJik
## Information
Model I am using (Bert, XLNet ...): DialoGPT-large
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Checkout [121dd43](https://github.com/huggingface/transformers/commit/121dd43).
2. Run the DialoGPT "How to use" code given [here](https://huggingface.co/microsoft/DialoGPT-medium), but change `DialoGPT-medium` to `DialoGPT-large`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-large")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-large")
# Let's chat for 5 lines
for step in range(5):
# encode the new user input, add the eos_token and return a tensor in Pytorch
new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
# append the new user input tokens to the chat history
bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
    # generate a response while limiting the total chat history to 1000 tokens,
chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    # pretty print the last output tokens from the bot
print("DialoGPT: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
3. For the user's first utterance, type "Hello, how are you?". I get this output:
```
>> User:Hello, how are you?
DialoGPT: 're you a fan of the show?
```
Note: this problem is still present in the current version of master (`5148f43`).
## Expected behavior
With the previous commit, `0c64b18`, I get this output:
```
>> User:Hello, how are you?
DialoGPT: I'm good, you?
```
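For what it's worth, a workaround consistent with the possible cause below is to pass an explicit all-ones attention mask so `generate()` does not build one from the pad token (untested sketch; variable names follow the snippet above):

```python
# Untested sketch: an explicit attention_mask bypasses the automatic mask
# creation that treats eos/pad tokens in the history as padding.
attention_mask = torch.ones_like(bot_input_ids)
chat_history_ids = model.generate(
    bot_input_ids,
    max_length=1000,
    pad_token_id=tokenizer.eos_token_id,
    attention_mask=attention_mask,
)
```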
## Possible cause
The issue seems to be related to the `<|endoftext|>` token, which is used at the end of every utterance. This is being regarded as a padding token, and thus it's attention-masked, which also seems to affect the position ids. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8032/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8032/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8031 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8031/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8031/comments | https://api.github.com/repos/huggingface/transformers/issues/8031/events | https://github.com/huggingface/transformers/pull/8031 | 729,050,530 | MDExOlB1bGxSZXF1ZXN0NTA5NjE0ODQy | 8,031 | [fix] FSMT slow test uses lists instead of torch tensors | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | `test_match_encode_decode` is failing in test_tf mode because it depends on torch. This removes the dependency.
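A hedged illustration of the kind of change (illustrative only, not the actual diff): keep the expected ids as plain Python lists so the assertion needs no torch import:

```python
# Illustrative only; `tokenizer`, `text`, and the id values are assumptions.
expected_ids = [4, 17, 9, 2]  # plain list instead of torch.tensor([...])
input_ids = tokenizer.encode(text)  # returns a Python list by default
self.assertListEqual(input_ids, expected_ids)
```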
cc @stas00 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8031/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8031/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8031",
"html_url": "https://github.com/huggingface/transformers/pull/8031",
"diff_url": "https://github.com/huggingface/transformers/pull/8031.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8031.patch",
"merged_at": 1603715557000
} |
https://api.github.com/repos/huggingface/transformers/issues/8030 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8030/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8030/comments | https://api.github.com/repos/huggingface/transformers/issues/8030/events | https://github.com/huggingface/transformers/pull/8030 | 729,016,560 | MDExOlB1bGxSZXF1ZXN0NTA5NTg5NTEw | 8,030 | Getting Hosted inference API working? | {
"login": "longenbach",
"id": 43459399,
"node_id": "MDQ6VXNlcjQzNDU5Mzk5",
"avatar_url": "https://avatars.githubusercontent.com/u/43459399?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/longenbach",
"html_url": "https://github.com/longenbach",
"followers_url": "https://api.github.com/users/longenbach/followers",
"following_url": "https://api.github.com/users/longenbach/following{/other_user}",
"gists_url": "https://api.github.com/users/longenbach/gists{/gist_id}",
"starred_url": "https://api.github.com/users/longenbach/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/longenbach/subscriptions",
"organizations_url": "https://api.github.com/users/longenbach/orgs",
"repos_url": "https://api.github.com/users/longenbach/repos",
"events_url": "https://api.github.com/users/longenbach/events{/privacy}",
"received_events_url": "https://api.github.com/users/longenbach/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"@longenbach `pipeline_tag` expects a single string, not an array of string.\r\n\r\nNote that you wouldn't need any of the tags or pipeline_tag (they would be detected automatically) if your `config.json` contained:\r\n\r\n```json\r\n{\r\n ...\r\n \"architectures\": [\r\n \"BertForMaskedLM\"\r\n ],\r\n \"model_type\": \"bert\"\r\n}\r\n```\r\n\r\nWe'll try to make that clearer in a next iteration.",
"Nice!\r\n\r\n<img width=\"749\" alt=\"Screenshot 2020-10-26 at 09 38 41\" src=\"https://user-images.githubusercontent.com/326577/97150874-27a15980-1745-11eb-8017-e809af886925.png\">\r\n",
"@julien-c it works 🤗 Thanks for the insight on the documentation. So you are saying we can avoid making a **model card** if you include that JSON chunk in your uploaded config.json file?\r\n\r\nIn case others find confusion with the **Hosted inference API**. Below is the YAML section of my model card that works: \r\n\r\n```html\r\n---\r\nlanguage: da\r\ntags:\r\n- bert\r\n- masked-lm\r\n- lm-head\r\nlicense: cc-by-4.0\r\ndatasets:\r\n- common_crawl\r\n- wikipedia\r\npipeline_tag: fill-mask\r\nwidget:\r\n- text: \"København er [MASK] i Danmark.\"\r\n---\r\n```\r\n"
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | Trying to get the Hosted inference API working. I was following https://gist.github.com/julien-c/857ba86a6c6a895ecd90e7f7cab48046 ... is the YAML syntax below correct?
```
pipeline:
-fill-mask
widget:
-text: "København er [mask] i Danmark."
```
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8030/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8030/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8030",
"html_url": "https://github.com/huggingface/transformers/pull/8030",
"diff_url": "https://github.com/huggingface/transformers/pull/8030.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8030.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/8029 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8029/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8029/comments | https://api.github.com/repos/huggingface/transformers/issues/8029/events | https://github.com/huggingface/transformers/issues/8029 | 728,999,112 | MDU6SXNzdWU3Mjg5OTkxMTI= | 8,029 | BlenderbotSmallTokenizer throws tuple index out of range error for stopword | {
"login": "Sumegh-git",
"id": 37850881,
"node_id": "MDQ6VXNlcjM3ODUwODgx",
"avatar_url": "https://avatars.githubusercontent.com/u/37850881?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sumegh-git",
"html_url": "https://github.com/Sumegh-git",
"followers_url": "https://api.github.com/users/Sumegh-git/followers",
"following_url": "https://api.github.com/users/Sumegh-git/following{/other_user}",
"gists_url": "https://api.github.com/users/Sumegh-git/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sumegh-git/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sumegh-git/subscriptions",
"organizations_url": "https://api.github.com/users/Sumegh-git/orgs",
"repos_url": "https://api.github.com/users/Sumegh-git/repos",
"events_url": "https://api.github.com/users/Sumegh-git/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sumegh-git/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | NONE | null | Using transformers==3.4.0
Script used:
```
from transformers import BlenderbotSmallTokenizer, BlenderbotForConditionalGeneration
mname = 'facebook/blenderbot-90M'
tokenizer = BlenderbotSmallTokenizer.from_pretrained(mname)
sentence = "."
tokenizer(sentence)['input_ids']
```
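Running the last line raises the error reported below. As a stopgap, a minimal guard — assuming, as the error message suggests, that the failure surfaces as an `IndexError` inside the BPE step for inputs that normalize to nothing (such as a lone punctuation mark) — could look like this sketch:
```
from transformers import BlenderbotSmallTokenizer

tokenizer = BlenderbotSmallTokenizer.from_pretrained('facebook/blenderbot-90M')

def safe_encode(text):
    # Fall back to an empty id list instead of crashing on degenerate inputs.
    try:
        return tokenizer(text)['input_ids']
    except IndexError:
        return []

print(safe_encode("."))  # [] instead of an exception
```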
This throws `IndexError: tuple index out of range` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8029/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8029/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8028 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8028/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8028/comments | https://api.github.com/repos/huggingface/transformers/issues/8028/events | https://github.com/huggingface/transformers/issues/8028 | 728,996,866 | MDU6SXNzdWU3Mjg5OTY4NjY= | 8,028 | [BUG] Unexpected overflowing_tokens in tokenizer.encode_plus | {
"login": "wangxinyu0922",
"id": 17926734,
"node_id": "MDQ6VXNlcjE3OTI2NzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/17926734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wangxinyu0922",
"html_url": "https://github.com/wangxinyu0922",
"followers_url": "https://api.github.com/users/wangxinyu0922/followers",
"following_url": "https://api.github.com/users/wangxinyu0922/following{/other_user}",
"gists_url": "https://api.github.com/users/wangxinyu0922/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wangxinyu0922/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wangxinyu0922/subscriptions",
"organizations_url": "https://api.github.com/users/wangxinyu0922/orgs",
"repos_url": "https://api.github.com/users/wangxinyu0922/repos",
"events_url": "https://api.github.com/users/wangxinyu0922/events{/privacy}",
"received_events_url": "https://api.github.com/users/wangxinyu0922/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I confirm the issue. It was ok with transformers 3.0.0, but from 3.1.0 it is changed.",
"And the code:\r\nhttps://github.com/huggingface/transformers/blob/6b4c617666fd26646d44d54f0c45dfe1332b12ca/src/transformers/tokenization_utils_base.py#L2558-L2571\r\nlooks bugged, despite above: `ids = ids[:-1]` should be `ids = ids[:-window_len]`.",
"Pinging @thomwolf ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,603 | 1,609 | 1,609 | NONE | null | ### Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.4.0
- Platform: Linux
- Python version: 3.7
- PyTorch version (GPU?): 1.3.1
- Tensorflow version (GPU?):
- Using GPU in script?: True
- Using distributed or parallel set-up in script?:
### Who can help
tokenizers: @mfuntowicz
## Information
When I am using the BERT tokenizer, I get unexpected `overflowing_tokens`. Here is an example to reproduce:
## To reproduce
```
import torch
import transformers
from transformers import AutoTokenizer
import pdb
tokenizer = AutoTokenizer.from_pretrained('bert-base-multilingual-cased')
subtoken_ids_sentence = [x for x in range(1000,1050)]
encoded_inputs = tokenizer.encode_plus(subtoken_ids_sentence,
max_length=40,
stride=20,
return_overflowing_tokens=True,
truncation=True,
)
print(encoded_inputs['overflowing_tokens'])
```
The output is: `[1029, 1030, 1031, 1032, 1033, 1034, 1035, 1036, 1037, 1038, 1039, 1040, 1041, 1042, 1043, 1044, 1045, 1046, 1047, 1048, 1049, 1048, 1047, 1046, 1045, 1044, 1043, 1042, 1041, 1040, 1039, 1038]`
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
The expected behavior I want is:
`[1018, 1019, 1020, 1021, 1022, 1023, 1024, 1025, 1026, 1027, 1028, 1029, 1030, 1031, 1032, 1033, 1034, 1035, 1036, 1037, 1038, 1039, 1040, 1041, 1042, 1043, 1044, 1045, 1046, 1047, 1048, 1049]`
The current output contains `[1029, 1030, 1031, 1032, 1033, 1034, 1035, 1036, 1037, 1038, 1039, 1040, 1041, 1042, 1043, 1044, 1045, 1046, 1047, 1048, 1049]` plus an additional reversed copy of the tail, `[1048, 1047, 1046, 1045, 1044, 1043, 1042, 1041, 1040, 1039, 1038]`, which I think is wrong.
When I dig into the code, I find that:
https://github.com/huggingface/transformers/blob/6b4c617666fd26646d44d54f0c45dfe1332b12ca/src/transformers/tokenization_utils_base.py#L2556-L2564
I wonder why there is a for loop here; I think I need `truncation_strategy = TruncationStrategy.ONLY_FIRST`. However, I failed to set the truncation strategy to `only_first`, because the code here forces the truncation strategy to `longest_first`.
https://github.com/huggingface/transformers/blob/6b4c617666fd26646d44d54f0c45dfe1332b12ca/src/transformers/tokenization_utils_base.py#L1750-L1759
Can you give me any help?
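(Not an official fix — just a hedged sketch of a stopgap. The slice arithmetic below reproduces the expected 32-token overflow by hand, assuming the usual single-sequence BERT layout with two special tokens, and pre-truncates the ids so `encode_plus` never enters the `longest_first` loop:)
```
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('bert-base-multilingual-cased')

max_length, stride, num_special = 40, 20, 2
keep = max_length - num_special                            # 38 ids fit beside [CLS]/[SEP]
subtoken_ids_sentence = list(range(1000, 1050))
first_part = subtoken_ids_sentence[:keep]                  # 1000..1037 stay in the window
expected_overflow = subtoken_ids_sentence[keep - stride:]  # 1018..1049: 32 ids, no reversal
encoded = tokenizer.encode_plus(first_part, max_length=max_length, truncation=True)
print(expected_overflow)
```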
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8028/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8028/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8027 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8027/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8027/comments | https://api.github.com/repos/huggingface/transformers/issues/8027/events | https://github.com/huggingface/transformers/issues/8027 | 728,994,446 | MDU6SXNzdWU3Mjg5OTQ0NDY= | 8,027 | (Load dataset failure) ConnectionError: Couldn’t reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py | {
"login": "AI678",
"id": 63541083,
"node_id": "MDQ6VXNlcjYzNTQxMDgz",
"avatar_url": "https://avatars.githubusercontent.com/u/63541083?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AI678",
"html_url": "https://github.com/AI678",
"followers_url": "https://api.github.com/users/AI678/followers",
"following_url": "https://api.github.com/users/AI678/following{/other_user}",
"gists_url": "https://api.github.com/users/AI678/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AI678/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AI678/subscriptions",
"organizations_url": "https://api.github.com/users/AI678/orgs",
"repos_url": "https://api.github.com/users/AI678/repos",
"events_url": "https://api.github.com/users/AI678/events{/privacy}",
"received_events_url": "https://api.github.com/users/AI678/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I got a same problem as you. I install the transformer from source but still has this problem. @sgugger ",
"I had the same problem, but this is how I solved it\r\n1、Access the address and download the file:\r\nhttps://raw.githubusercontent.com/huggingface/datasets/1.6.0/datasets/bookcorpus/bookcorpus.py\r\n2、Put the file in this position:\r\n\r\n3、Then you can load the corpus normally. I hope it can help you\r\n"
] | 1,603 | 1,619 | 1,603 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
<!-- Description of your issue -->
Hey, I want to load the cnn-dailymail dataset for fine-tune.
I write the code like this
from datasets import load_dataset
test_dataset = load_dataset(“cnn_dailymail”, “3.0.0”, split=“train”)
And I got the following errors.
```
Traceback (most recent call last):
  File "test.py", line 7, in <module>
    test_dataset = load_dataset("cnn_dailymail", "3.0.0", split="test")
  File "C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\load.py", line 589, in load_dataset
    module_path, hash = prepare_module(
  File "C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\load.py", line 268, in prepare_module
    local_path = cached_path(file_path, download_config=download_config)
  File "C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\utils\file_utils.py", line 300, in cached_path
    output_path = get_from_cache(
  File "C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\utils\file_utils.py", line 475, in get_from_cache
    raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py
```
How can I fix this?
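(A hedged workaround sketch, along the lines of the comment above: download `cnn_dailymail.py` by hand from the raw.githubusercontent.com URL in the traceback, then point `load_dataset` at the local copy. The local path is my assumption, not part of the original report:)
```
from datasets import load_dataset

# Assumed location of the manually downloaded dataset script:
local_script = "./cnn_dailymail/cnn_dailymail.py"
test_dataset = load_dataset(local_script, "3.0.0", split="test")
```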
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on the forum/Stack Overflow**: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8027/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8027/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8026 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8026/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8026/comments | https://api.github.com/repos/huggingface/transformers/issues/8026/events | https://github.com/huggingface/transformers/pull/8026 | 728,961,783 | MDExOlB1bGxSZXF1ZXN0NTA5NTUxMTQ0 | 8,026 | [Model Card] new cross lingual sentence model for German and English | {
"login": "PhilipMay",
"id": 229382,
"node_id": "MDQ6VXNlcjIyOTM4Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PhilipMay",
"html_url": "https://github.com/PhilipMay",
"followers_url": "https://api.github.com/users/PhilipMay/followers",
"following_url": "https://api.github.com/users/PhilipMay/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions",
"organizations_url": "https://api.github.com/users/PhilipMay/orgs",
"repos_url": "https://api.github.com/users/PhilipMay/repos",
"events_url": "https://api.github.com/users/PhilipMay/events{/privacy}",
"received_events_url": "https://api.github.com/users/PhilipMay/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"@julien-c The small subsequent adjustments are done. \r\nI would be happy if it could be merged.\r\n\r\nThanks a lot\r\nPhilip"
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | - add new model card
- adapted the model cards of our other sentence embedding models | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8026/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8026/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8026",
"html_url": "https://github.com/huggingface/transformers/pull/8026",
"diff_url": "https://github.com/huggingface/transformers/pull/8026.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8026.patch",
"merged_at": 1603738106000
} |
https://api.github.com/repos/huggingface/transformers/issues/8025 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8025/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8025/comments | https://api.github.com/repos/huggingface/transformers/issues/8025/events | https://github.com/huggingface/transformers/pull/8025 | 728,901,057 | MDExOlB1bGxSZXF1ZXN0NTA5NTEwNjg1 | 8,025 | Create README.md | {
"login": "longenbach",
"id": 43459399,
"node_id": "MDQ6VXNlcjQzNDU5Mzk5",
"avatar_url": "https://avatars.githubusercontent.com/u/43459399?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/longenbach",
"html_url": "https://github.com/longenbach",
"followers_url": "https://api.github.com/users/longenbach/followers",
"following_url": "https://api.github.com/users/longenbach/following{/other_user}",
"gists_url": "https://api.github.com/users/longenbach/gists{/gist_id}",
"starred_url": "https://api.github.com/users/longenbach/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/longenbach/subscriptions",
"organizations_url": "https://api.github.com/users/longenbach/orgs",
"repos_url": "https://api.github.com/users/longenbach/repos",
"events_url": "https://api.github.com/users/longenbach/events{/privacy}",
"received_events_url": "https://api.github.com/users/longenbach/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"we should probably validate model ids (to something like `[\\w\\d-_]{3,}`) @Pierrci, mind creating an issue for this?"
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8025/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8025/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8025",
"html_url": "https://github.com/huggingface/transformers/pull/8025",
"diff_url": "https://github.com/huggingface/transformers/pull/8025.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8025.patch",
"merged_at": 1603610447000
} |
https://api.github.com/repos/huggingface/transformers/issues/8024 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8024/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8024/comments | https://api.github.com/repos/huggingface/transformers/issues/8024/events | https://github.com/huggingface/transformers/issues/8024 | 728,889,733 | MDU6SXNzdWU3Mjg4ODk3MzM= | 8,024 | T5 on multiple tasks | {
"login": "yes1234man",
"id": 59166627,
"node_id": "MDQ6VXNlcjU5MTY2NjI3",
"avatar_url": "https://avatars.githubusercontent.com/u/59166627?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yes1234man",
"html_url": "https://github.com/yes1234man",
"followers_url": "https://api.github.com/users/yes1234man/followers",
"following_url": "https://api.github.com/users/yes1234man/following{/other_user}",
"gists_url": "https://api.github.com/users/yes1234man/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yes1234man/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yes1234man/subscriptions",
"organizations_url": "https://api.github.com/users/yes1234man/orgs",
"repos_url": "https://api.github.com/users/yes1234man/repos",
"events_url": "https://api.github.com/users/yes1234man/events{/privacy}",
"received_events_url": "https://api.github.com/users/yes1234man/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,603 | 1,609 | 1,609 | NONE | null | Dear Huggingface team,
I am looking for code to pretrain T5 on multiple tasks; the best I could find is the code released with a wrapper around the Hugging Face T5 model in the original authors' repo:
https://github.com/google-research/text-to-text-transfer-transformer/blob/master/t5/models/hf_model.py
On lines 308-310, they create TensorFlow datasets, I think, and pass them to the Hugging Face model. If one wants to add data parallelism to this code to make it efficient in PyTorch, how could that be done? Thanks a lot, I appreciate your help.
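(Not the authors' setup — just a minimal sketch of the simplest data-parallel wrapping one could try around the Hugging Face T5 model in PyTorch. `t5-small` is an arbitrary choice here, and `DistributedDataParallel` with a `DistributedSampler` would scale better than `DataParallel`:)
```
import torch
from transformers import T5ForConditionalGeneration

device = "cuda" if torch.cuda.is_available() else "cpu"
model = T5ForConditionalGeneration.from_pretrained("t5-small").to(device)

if torch.cuda.device_count() > 1:
    # Replicates the model on each visible GPU and splits each batch across them.
    model = torch.nn.DataParallel(model)
```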
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8024/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8024/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8023 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8023/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8023/comments | https://api.github.com/repos/huggingface/transformers/issues/8023/events | https://github.com/huggingface/transformers/pull/8023 | 728,830,370 | MDExOlB1bGxSZXF1ZXN0NTA5NDYxMTgx | 8,023 | Tiny TF Bart fixes | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | MEMBER | null | Tiny TF Bart fixes | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8023/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8023/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8023",
"html_url": "https://github.com/huggingface/transformers/pull/8023",
"diff_url": "https://github.com/huggingface/transformers/pull/8023.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8023.patch",
"merged_at": 1603718996000
} |
https://api.github.com/repos/huggingface/transformers/issues/8022 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8022/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8022/comments | https://api.github.com/repos/huggingface/transformers/issues/8022/events | https://github.com/huggingface/transformers/issues/8022 | 728,823,097 | MDU6SXNzdWU3Mjg4MjMwOTc= | 8,022 | [test] tests/test_modeling_deberta.py breaks on pytorch-nightly | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I couldn't reproduce this against 1.6. When I run it against 1.7, I get:\r\n\r\n /data/users/gchanan/transformers/src/transformers/modeling_deberta.py:574: UserWarning: Output 0 of SplitBackward is a view and is being modified inplace. This view is an output of a function that returns multiple views. Inplace operators on such views are being deprecated and will be forbidden starting from version 1.8. Consider using `unsafe_` version of the function that produced this view or don't modify this view inplace. (Triggered internally at /opt/conda/conda-bld/pytorch_1601363278767/work/torch/csrc/autograd/variable.cpp:480.)",
"Here's a workaround: change\r\nhttps://github.com/huggingface/transformers/blob/5148f433097915f30864bf0ca6090656fecefbb8/src/transformers/modeling_deberta.py#L574\r\n\r\nto:\r\n`query_layer = query_layer + self.transpose_for_scores(self.q_bias[None, None, :])`\r\n\r\ni.e. make the modification out-of-place. It might be better to do what is in the warning and change the `split` to `unsafe_split`, but I haven't tested that.",
"Thank you very much, @gchanan! That solved the problem.",
"Also pasting a more details explanation from slack by @albanD:\r\n\r\n> The warning that is raised just before tells you what the issue is and tells you that this won't be allowed soon as it can crash (as you have seen). Removing the inplace is the right fix here.\r\nThe reason here is that split and chunk were fixed to properly return view (and avoid silently wrong gradients). But Inplace on the result is not handled by the autograd and will soon raise an error.\r\nWe have left unsafe_split and unsafe_chunk (both in python and c++) if you need the old behavior while you fix your code to avoid the inplace."
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | pytorch-nightly and pytorch-1.7-candidate break these:
```
FAILED tests/test_modeling_deberta.py::DebertaModelTest::test_deberta_model - RuntimeError: diff_view_meta->output_nr_ == 0 INTERNAL ASSERT FAILED...
FAILED tests/test_modeling_deberta.py::DebertaModelTest::test_feed_forward_chunking - RuntimeError: diff_view_meta->output_nr_ == 0 INTERNAL ASSER...
FAILED tests/test_modeling_deberta.py::DebertaModelTest::test_for_sequence_classification - RuntimeError: diff_view_meta->output_nr_ == 0 INTERNAL...
FAILED tests/test_modeling_deberta.py::DebertaModelTest::test_resize_tokens_embeddings - RuntimeError: diff_view_meta->output_nr_ == 0 INTERNAL AS...
```
looks like a bug in pytorch, right?
```
RuntimeError: diff_view_meta->output_nr_ == 0 INTERNAL ASSERT FAILED at "/opt/conda/conda- \
bld/pytorch_1603436966316/work/torch/csrc/autograd/variable.cpp":363, please report a bug to PyTorch.
```
log:
```
================================================================ test session starts ================================================================
platform linux -- Python 3.8.5, pytest-6.1.1, py-1.9.0, pluggy-0.13.1 -- /home/stas/anaconda3/envs/main-38/bin/python
cachedir: .pytest_cache
rootdir: /mnt/nvme1/code/huggingface/transformers-master
plugins: typeguard-2.10.0, forked-1.3.0, xdist-2.1.0, instafail-0.4.2
collecting ... 2020-10-24 09:21:07.169605: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
collected 1 item
tests/test_modeling_deberta.py::DebertaModelTest::test_deberta_model FAILED
________________________________________________________ DebertaModelTest.test_deberta_model ________________________________________________________
self = <tests.test_modeling_deberta.DebertaModelTest testMethod=test_deberta_model>
def test_deberta_model(self):
config_and_inputs = self.model_tester.prepare_config_and_inputs()
> self.model_tester.create_and_check_deberta_model(*config_and_inputs)
tests/test_modeling_deberta.py:210:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/test_modeling_deberta.py:159: in create_and_check_deberta_model
sequence_output = model(input_ids, attention_mask=input_mask, token_type_ids=token_type_ids)[0]
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py:744: in _call_impl
result = self.forward(*input, **kwargs)
src/transformers/modeling_deberta.py:891: in forward
encoder_outputs = self.encoder(
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py:744: in _call_impl
result = self.forward(*input, **kwargs)
src/transformers/modeling_deberta.py:401: in forward
hidden_states = layer_module(
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py:744: in _call_impl
result = self.forward(*input, **kwargs)
src/transformers/modeling_deberta.py:324: in forward
attention_output = self.attention(
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py:744: in _call_impl
result = self.forward(*input, **kwargs)
src/transformers/modeling_deberta.py:257: in forward
self_output = self.self(
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py:744: in _call_impl
result = self.forward(*input, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = DisentangledSelfAttention(
(in_proj): Linear(in_features=32, out_features=96, bias=False)
(dropout): StableDropout()
)
hidden_states = tensor([[[-1.5951, -1.0046, 0.5641, ..., -0.4472, 0.0159, -0.1435],
[-1.3476, -0.1559, -1.3866, ..., 1.2... [ 0.5269, -0.0601, -0.4018, ..., -0.1616, 0.5335, -0.8894]]],
device='cuda:0', grad_fn=<MulBackward0>)
attention_mask = tensor([[[[1, 1, 1, 1, 0, 0, 0],
[1, 1, 1, 1, 0, 0, 0],
[1, 1, 1, 1, 0, 0, 0],
[1, 1, 1,..., 1, 1, 1, 1],
[0, 0, 0, 1, 1, 1, 1],
[0, 0, 0, 1, 1, 1, 1]]]], device='cuda:0', dtype=torch.uint8)
return_att = False, query_states = None, relative_pos = None, rel_embeddings = None
def forward(
self,
hidden_states,
attention_mask,
return_att=False,
query_states=None,
relative_pos=None,
rel_embeddings=None,
):
"""Call the module
Args:
hidden_states (:obj:`torch.FloatTensor`):
Input states to the module usally the output from previous layer, it will be the Q,K and V in `Attention(Q,K,V)`
attention_mask (:obj:`torch.ByteTensor`):
An attention mask matrix of shape [`B`, `N`, `N`] where `B` is the batch size, `N` is the maxium sequence length in which element [i,j] = `1` means the `i` th token in the input can attend to the `j` th token.
return_att (:obj:`bool`, optional):
Whether return the attention maxitrix.
query_states (:obj:`torch.FloatTensor`, optional):
The `Q` state in `Attention(Q,K,V)`.
relative_pos (:obj:`torch.LongTensor`):
The relative position encoding between the tokens in the sequence. It's of shape [`B`, `N`, `N`] with values ranging in [`-max_relative_positions`, `max_relative_positions`].
rel_embeddings (:obj:`torch.FloatTensor`):
The embedding of relative distances. It's a tensor of shape [:math:`2 \\times \\text{max_relative_positions}`, `hidden_size`].
"""
if query_states is None:
qp = self.in_proj(hidden_states) # .split(self.all_head_size, dim=-1)
query_layer, key_layer, value_layer = self.transpose_for_scores(qp).chunk(3, dim=-1)
else:
def linear(w, b, x):
if b is not None:
return torch.matmul(x, w.t()) + b.t()
else:
return torch.matmul(x, w.t()) # + b.t()
ws = self.in_proj.weight.chunk(self.num_attention_heads * 3, dim=0)
qkvw = [torch.cat([ws[i * 3 + k] for i in range(self.num_attention_heads)], dim=0) for k in range(3)]
qkvb = [None] * 3
q = linear(qkvw[0], qkvb[0], query_states)
k, v = [linear(qkvw[i], qkvb[i], hidden_states) for i in range(1, 3)]
query_layer, key_layer, value_layer = [self.transpose_for_scores(x) for x in [q, k, v]]
query_layer += self.transpose_for_scores(self.q_bias[None, None, :])
> value_layer += self.transpose_for_scores(self.v_bias[None, None, :])
E RuntimeError: diff_view_meta->output_nr_ == 0 INTERNAL ASSERT FAILED at "/opt/conda/conda-bld/pytorch_1603436966316/work/torch/csrc/autograd/variable.cpp":363, please report a bug to PyTorch.
src/transformers/modeling_deberta.py:575: RuntimeError
============================================================== short test summary info ==============================================================
FAILED tests/test_modeling_deberta.py::DebertaModelTest::test_deberta_model - RuntimeError: diff_view_meta->output_nr_ == 0 INTERNAL ASSERT FAILED...
=========================================================== 1 failed, 5 warnings in 4.22s ===========================================================
```
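For context, here is a minimal self-contained sketch of what I assume trips this assert — an in-place add on a view returned by `chunk()` — together with the out-of-place form that avoids it (my reconstruction, not a confirmed root cause):
```
import torch

x = torch.randn(2, 6, requires_grad=True)
q, k, v = x.chunk(3, dim=-1)   # chunk() hands back views of x
bias = torch.randn(2, 2)

q = q + bias                   # out-of-place add: safe under autograd
# q += bias                    # in-place add on a multi-output view: deprecated
#                              # on 1.7 and can hit this assert on nightlies
```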
## Environment info
```
- `transformers` version: 3.4.0
- Platform: Linux-5.4.0-52-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.8.0.dev20201023 (True)
- Tensorflow version (GPU?): 2.3.1 (True)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
for pytorch-1.7 candidate I used:
https://download.pytorch.org/whl/test/cu102/torch-1.7.0-cp38-cp38-linux_x86_64.whl
----------------------
**update**: the PyTorch release team (tracked at https://github.com/pytorch/pytorch/issues/45592) has been notified of this issue on the PyTorch Slack. Waiting to hear back from them. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8022/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8022/timeline | completed | null | null |