repo | number | state | title | body | created_at | closed_at | comments |
---|---|---|---|---|---|---|---|
transformers | 8,102 | closed | Adjust setup so that all extras run on Windows | # What does this PR do?
This removes some of the extra deps if they don't exist on Windows, so that the install doesn't fail.
| 10-27-2020 18:37:29 | 10-27-2020 18:37:29 | |
transformers | 8,101 | closed | should be BartConfig.prefix None? | For bart-large-xsum/bart-large-cnn, currently set to `' '`
+ does not help evaluation performance vs setting to `None` (checked xsum and CNN)
+ It could help fine-tuning performance by serving as a work-around for the `add_prefix_space` issue?
Putting here in case others have thoughts.
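For reference, a minimal way to inspect and override this value through the config (a sketch; `facebook/bart-large-xsum` is the model id as hosted on the hub):
```python
from transformers import BartConfig

config = BartConfig.from_pretrained("facebook/bart-large-xsum")
print(repr(config.prefix))  # currently ' '

# the change being discussed here: drop the prefix entirely
config.prefix = None
```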
| 10-27-2020 17:08:23 | 10-27-2020 17:08:23 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 8,100 | closed | Rename swish to silu to give appropriate credit | The swish was originally coined the "SiLU" in https://arxiv.org/pdf/1606.08415.pdf and https://arxiv.org/abs/1702.03118 long before the swish paper. Renaming other people's exact same ideas is unacceptable and huggingface's naming convention implicitly erases the research and work of people outside of Google.
This request was inspired by a [discussion](https://www.reddit.com/r/MachineLearning/comments/hkiyir/r_google_has_a_credit_assignment_problem_in/) and a recent [tensorflow issue](https://github.com/tensorflow/tensorflow/issues/41066), but this problem has been brought up every few months for the past few years. In light of recent efforts to make the ML community more equitable and _fair_, this is a no-brainer and long overdue.
**Will this change the current api? How?**
The API would replace the "swish" argument with the "silu" argument and deprecate the swish.
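For reference, a minimal sketch of what such a rename could look like (the function body is the standard definition, but the placement and the deprecation message are illustrative, not the library's actual code):
```python
import warnings

import torch


def silu(x):
    # SiLU (also known as "swish"): x * sigmoid(x)
    return x * torch.sigmoid(x)


def swish(x):
    # hypothetical backward-compatible alias that warns before delegating to silu
    warnings.warn("swish is deprecated, use silu instead", FutureWarning)
    return silu(x)
```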
[PyTorch 1.7](https://pytorch.org/docs/1.7.0/generated/torch.nn.SiLU.html?highlight=silu) added the SiLU. Tensorflow added the [SiLU](https://github.com/tensorflow/tensorflow/blob/27d26a8d86bceda282ad9ba3e3116a00759d4ebc/tensorflow/python/ops/nn_impl.py#L517) and should be in the next version. Jax has already added the SiLU;
jax.nn.swish will eventually be deprecated and jax.nn.silu will be added and both of the aforementioned papers will be cited in the documentation. | 10-27-2020 16:46:34 | 10-27-2020 16:46:34 | It is worth mentioning that PyTorch's SiLU op is an optimized implementation: https://github.com/pytorch/pytorch/pull/42976<|||||>You are correct, we should update and deprecate. Do you want to take a stab at a PR? |
transformers | 8,099 | closed | Reformer implementation in Tensorflow | # 🚀 Feature request
Since there is an implementation of the Reformer in Pytorch, my question is if there will be an implementation for Tensorflow too?
| 10-27-2020 15:40:43 | 10-27-2020 15:40:43 | This would be cool! I don't believe it's on our roadmap currently (cc @patrickvonplaten), but should be part of a general TF overhaul we'll be doing in the coming months.
No date for that yet, but I think you can expect it in the future.<|||||>Could be a "Good First Issue" :D Yeah, I think this is not really on our roadmap because `Reformer` is a pretty complicated model (It tweaks the backprop pass) and does not really have pretrained weights :-/ <|||||>Sounds like a good first project to deep dive in TensorFlow and rip a few hair out in the process :) <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 8,098 | closed | RuntimeError: Trying to create tensor with negative dimension | ## Environment info
- `transformers` version: 3.4.0
- Platform: Linux-3.10.0-957.el7.x86_64-x86_64-with-debian-stretch-sid
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
@TevenLeScao
## Information
I am using TransfoXLModel. The problem arises when running the code below (if I do not fill in vocab_size=256, it works fine):
* the example scripts:
```python
from transformers import TransfoXLConfig, TransfoXLModel
configuration = TransfoXLConfig(vocab_size=256)
model = TransfoXLModel(configuration)
```
## Error I get:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-323-7039580347ad> in <module>
3 configuration = TransfoXLConfig(vocab_size=256)
4 # Initializing a model from the configuration
----> 5 model = TransfoXLModel(configuration)
/opt/conda/lib/python3.6/site-packages/transformers/modeling_transfo_xl.py in __init__(self, config)
736
737 self.word_emb = AdaptiveEmbedding(
--> 738 config.vocab_size, config.d_embed, config.d_model, config.cutoffs, div_val=config.div_val
739 )
740
/opt/conda/lib/python3.6/site-packages/transformers/modeling_transfo_xl.py in __init__(self, n_token, d_embed, d_proj, cutoffs, div_val, sample_softmax)
421 l_idx, r_idx = self.cutoff_ends[i], self.cutoff_ends[i + 1]
422 d_emb_i = d_embed // (div_val ** i)
--> 423 self.emb_layers.append(nn.Embedding(r_idx - l_idx, d_emb_i))
424 self.emb_projs.append(nn.Parameter(torch.FloatTensor(d_proj, d_emb_i)))
425
/opt/conda/lib/python3.6/site-packages/torch/nn/modules/sparse.py in __init__(self, num_embeddings, embedding_dim, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse, _weight)
107 self.scale_grad_by_freq = scale_grad_by_freq
108 if _weight is None:
--> 109 self.weight = Parameter(torch.Tensor(num_embeddings, embedding_dim))
110 self.reset_parameters()
111 else:
RuntimeError: Trying to create tensor with negative dimension -199744: [-199744, 16]
| 10-27-2020 15:21:13 | 10-27-2020 15:21:13 | I'm experiencing the same issue. Setting `vocab_size=tokenizer.vocab_size` does not help.
I've noticed if I artificially inflate the vocab_size (i.e. `vocab_size=tokenizer.vocab_size+1`), the negative dimension in the error will also be greater by 1 (i.e. `-199743` instead of `-199744`).<|||||>Hey! From your description it sounds like you haven't changed the cutoff points for adaptive embeddings (the different sizes of the clusters for the hierarchical softmax generation). This causes an issue as the last cluster of embeddings, the one for the least frequent words, has size `vocab_size - cutoffs[-1]`, so if the last cutoff is bigger than the vocab size, that's negative.
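To make the arithmetic behind the error concrete (a sketch assuming the default TransfoXL cutoffs of `[20000, 40000, 200000]`):
```python
vocab_size = 256
cutoffs = [20000, 40000, 200000]            # default cluster boundaries (assumption)
cutoff_ends = [0] + cutoffs + [vocab_size]  # the model appends the two ends itself
last_cluster_size = cutoff_ends[-1] - cutoff_ends[-2]
print(last_cluster_size)  # 256 - 200000 = -199744 -> the negative embedding dimension
```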
Now for only 256 vocab words, adaptive embeddings don't really matter anyway, so I'd recommend running
```
from transformers import TransfoXLConfig, TransfoXLModel
configuration = TransfoXLConfig(vocab_size=256, cutoffs=[])
model = TransfoXLModel(configuration)
```<|||||>This worked for me, thanks a lot @TevenLeScao ! If you had a larger vocab size would you just recommend setting the last cutoff to be `0 < cutoff < vocab_size`?
<|||||>Ah actually I re-read the code and docs and the two ends of the cutoffs are already provided; they're appended later, so what you want is actually `cutoffs=[]`, even if `cutoffs=[0, 256]` seems to work anyway (I've edited my previous answer).
In any case, to answer your question, yes, for a larger vocab size it is actually quite helpful to have `0 < cutoff < vocab_size`! Personally I start considering doing that for a vocabulary on the order of a few tens of thousands - so like 40000 for example, but your mileage may vary, I recommend experimenting yourself and checking whether it makes a difference :) it should mostly help with memory use.<|||||>Awesome, thanks for the helpful advice! I'd posted about the same issue on https://discuss.huggingface.co/t/transfoxllmheadmodel-trying-to-create-tensor-with-negative-dimension-199500/1768/2 but it remained unanswered, so I linked your comment in that thread as a solution.<|||||>@TevenLeScao Thanks very much, it works great for me, close the issue now. |
transformers | 8,097 | closed | [wip/s2s] Aggregate Rouge Deterministically | Take randomness/sampling out of `calculate_rouge_score`.
Not ready for merge as the default should be changed back to not using this.
| 10-27-2020 14:59:31 | 10-27-2020 14:59:31 | |
transformers | 8,096 | closed | Create README.md | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| 10-27-2020 14:32:35 | 10-27-2020 14:32:35 | |
transformers | 8,095 | closed | Fix IterableDataset with __len__ in Trainer | # What does this PR do?
Fix #8087
Bring back support for `IterableDataset` with `__len__` in Trainer. Changed in #7858
@sgugger | 10-27-2020 13:36:40 | 10-27-2020 13:36:40 | |
transformers | 8,094 | closed | Documentation error in question-answering pipeline | Hi,
The [QuestionAnsweringPipeline](https://huggingface.co/transformers/main_classes/pipelines.html?highlight=pipelines#transformers.QuestionAnsweringPipeline.__call__) is returning the start and end positions within the context string, and not positions in "the tokenized version of the input" as mentioned in the doc.
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering, pipeline
tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased-whole-word-masking-finetuned-squad")
qa_pipeline = pipeline("question-answering")
text = r"""
🤗 Transformers (formerly known as pytorch-transformers and pytorch-pretrained-bert) provides general-purpose
architectures (BERT, GPT-2, RoBERTa, XLM, DistilBert, XLNet…) for Natural Language Understanding (NLU) and Natural
Language Generation (NLG) with over 32+ pretrained models in 100+ languages and deep interoperability between
TensorFlow 2.0 and PyTorch.
"""
questions = [
"How many pretrained models?"
]
result = qa_pipeline(question=questions, context=text)
print(result)
#this is the correct answer
print(text[int(result["start"]):int(result["end"])])
#this is not correct
print(tokenizer.tokenize(text[int(result["start"]):int(result["end"])]))
```
Sorry if I misunderstood the doc | 10-27-2020 13:09:13 | 10-27-2020 13:09:13 | You are correct! Do you want to open a PR fixing the docs?<|||||>Sure, I'll do that. Closing this issue for now, I'll refer to this in the PR.<|||||>There is one more thing, sorry I didn't realize this earlier, and please let me know if I am wrong. The [span_to_answer](https://huggingface.co/transformers/main_classes/pipelines.html#transformers.QuestionAnsweringPipeline.span_to_answer) method of this pipeline object doesn't add much value, because the object itself, when called with `question` and `context`, will return the start and end indexes of the answer in the context. Moreover, it will give the wrong results because `span_to_answer` is expecting token indexes and not string indexes.
In continuation of the above code:
```python
print(len(tokenizer.tokenize(text)))
# output: 96
print(qa_pipeline.span_to_answer(text=text,start=int(result[0]["start"]), end=int(result[0]["end"])))
# this will print {'answer': '', 'start': 0, 'end': 0} because the start index of the answer is 256 and end index is 264 while the
# tokenized length is 96, this line in the function will stop the loop
# if token_idx > end:
# break
```
This part in `__call__` [method](https://huggingface.co/transformers/main_classes/pipelines.html#transformers.QuestionAnsweringPipeline.__call__) is already taking care of remapping to string indexes:
```python
# Convert the answer (tokens) back to the original text
answers += [
{
"score": score.item(),
"start": np.where(char_to_word == feature.token_to_orig_map[s])[0][0].item(),
"end": np.where(char_to_word == feature.token_to_orig_map[e])[0][-1].item(),
"answer": " ".join(
example.doc_tokens[feature.token_to_orig_map[s] : feature.token_to_orig_map[e] + 1]
),
}
```
Unless we have some method to get the token index, this method I think will not work.
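(As an aside, one possible way to recover token indexes from character positions, sketched below under the assumption of a *fast* tokenizer and continuing from the snippet at the top of this issue, is `BatchEncoding.char_to_token`:)
```python
from transformers import AutoTokenizer

fast_tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=True)
encoding = fast_tokenizer(text)  # encode the context only
# map the character span returned by the pipeline back to token indexes
start_token = encoding.char_to_token(int(result["start"]))
end_token = encoding.char_to_token(int(result["end"]) - 1)  # "end" is exclusive, look at the last character
print(start_token, end_token)
```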
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 8,093 | closed | Fully remove codecov | Fully removes codecov as we don't have any full test suite in circleci, alongside https://github.com/huggingface/transformers/commit/829b9f8cc321aa28396e6203e0f21eed26b132f7
If we want to put it back up, we can merge the two slow tests (TF + PT) and run coverage on that, but we should first take care of the inconsistencies in coverage as explained in https://github.com/huggingface/transformers/issues/6317
cc @stas00 | 10-27-2020 12:34:14 | 10-27-2020 12:34:14 | If we try to go back to using it, perhaps there is a way to merge reports from 2 half-full tests - otherwise it creates an unnecessary slowdown on CI to run both together if we don't need to. |
transformers | 8,092 | closed | Fix DeBERTa docs | Fix the DeBERTa docs | 10-27-2020 12:10:34 | 10-27-2020 12:10:34 | |
transformers | 8,091 | closed | Fix assertion error message for MLflowCallback | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| 10-27-2020 12:03:26 | 10-27-2020 12:03:26 | |
transformers | 8,090 | closed | Update README.md | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| 10-27-2020 11:13:02 | 10-27-2020 11:13:02 | |
transformers | 8,089 | closed | Create README.md | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| 10-27-2020 11:11:35 | 10-27-2020 11:11:35 | |
transformers | 8,088 | closed | Create README.md | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| 10-27-2020 11:10:07 | 10-27-2020 11:10:07 | |
transformers | 8,087 | closed | #7858 breaks IterableDataset with __len__ in Trainer | https://github.com/huggingface/transformers/blob/08f534d2da47875a4b7eb1c125cfa7f0f3b79642/src/transformers/trainer.py#L381-L382
This used to be (before #7858)
```python
if isinstance(self.train_dataset, torch.utils.data.IterableDataset):
```
I am using IterableDataset with __len__ in Trainer. This change makes it return a sampler and results in an error later. `ValueError: DataLoader with IterableDataset: expected unspecified sampler option, but got sampler=<torch.utils.data.sampler.RandomSampler object at 0x7fa32c57b340>`
Maybe change to this?
```python
if (isinstance(self.train_dataset, torch.utils.data.IterableDataset) or
not isinstance(self.train_dataset, collections.abc.Sized)):
```
@j-rossi-nl @sgugger
| 10-27-2020 10:43:20 | 10-27-2020 10:43:20 | I'm sorry, but could you explain to me why an `IterableDataset` with a `__len__` is not a regular `Dataset`?<|||||>In my case, I wrap a `Dataset` using a class that inherits `IterableDataset`, and defines a `__len__()`.
The purpose is to implement smart batching[1]. I use `IterableDataset` so I can control how to iterate the data.
I don't know if it's possible/easier to do this with `Dataset`+`Sampler`; if so, please let me know.
Also note that (after the change) if I drop `__len__()` to suppress the bug, I would then need to specify `max_iter` (or something like that), which is inconvenient.
[1] (https://wandb.ai/pommedeterresautee/speed_training/reports/Train-HuggingFace-Models-Twice-As-Fast--VmlldzoxMDgzOTI)
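For context, a minimal sketch of the kind of wrapper described above (the class name and the length-sorted iteration are illustrative, not the actual code):
```python
from torch.utils.data import IterableDataset


class SmartBatchingWrapper(IterableDataset):
    """Wraps a map-style dataset and yields examples in a length-aware order."""

    def __init__(self, dataset, sort_key=len):
        self.dataset = dataset
        self.sort_key = sort_key

    def __len__(self):
        # having __len__ is what previously made Trainer treat this as a sized dataset
        return len(self.dataset)

    def __iter__(self):
        order = sorted(range(len(self.dataset)), key=lambda i: self.sort_key(self.dataset[i]))
        for i in order:
            yield self.dataset[i]
```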
<|||||>It does seem a bit hacky but I guess we can add that test. Do you want to suggest a PR with the change? |
transformers | 8,086 | closed | Hello world example fail with transformers-3.4 | ## Environment info
- `transformers` version:3.4
- Platform: Mac
- Python version: 3.6
- PyTorch version (GPU?):
- Tensorflow version (GPU?): 2.3
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
## Information
Model I am using (Bert, XLNet ...): Bert
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
## To reproduce
Steps to reproduce the behavior:
Download all the code of branch-3.4.0
```
from transformers import AutoTokenizer, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained("/Users/gt/Desktop/transformers-3.4.0/my_code/bert-base-uncased-tokenizer.json")
model = TFAutoModel.from_pretrained("/Users/gt/Desktop/transformers-3.4.0/my_code/bert-base-uncased-pytorch_model.bin")
inputs = tokenizer("Hello world!", return_tensors="tf")
outputs = model(**inputs)
print(outputs)
```
Model downloaded from https://mirrors.tuna.tsinghua.edu.cn/hugging-face-models/
Get error:
```
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte
```
in line 4 | 10-27-2020 10:15:13 | 10-27-2020 10:15:13 | We will probably need at least the full error trace<|||||>```
Calling BertTokenizer.from_pretrained() with the path to a single file or url is deprecated
Special tokens have been added in the vocabulary, make sure the associated word embedding are fine-tuned or trained.
Traceback (most recent call last):
File "/Users/gt/Desktop/transformers-3.4.0/my_code/test.py", line 4, in <module>
model = TFAutoModel.from_pretrained("/Users/gt/Desktop/transformers-3.4.0/my_code/bert-base-uncased-pytorch_model.bin")
File "/Users/gt/Desktop/transformers-3.4.0/src/transformers/modeling_tf_auto.py", line 493, in from_pretrained
pretrained_model_name_or_path, return_unused_kwargs=True, **kwargs
File "/Users/gt/Desktop/transformers-3.4.0/src/transformers/configuration_auto.py", line 330, in from_pretrained
config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
File "/Users/gt/Desktop/transformers-3.4.0/src/transformers/configuration_utils.py", line 374, in get_config_dict
config_dict = cls._dict_from_json_file(resolved_config_file)
File "/Users/gt/Desktop/transformers-3.4.0/src/transformers/configuration_utils.py", line 456, in _dict_from_json_file
text = reader.read()
File "/Users/gt/Py36-tf1.4/lib/python3.6/codecs.py", line 321, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte
```<|||||>This line: `model = TFAutoModel.from_pretrained("/Users/gt/Desktop/transformers-3.4.0/my_code/bert-base-uncased-pytorch_model.bin")`
should point to a directory containing both the model file and the configuration. Also, you're loading a `pytorch_model.bin` in a `TFAutoModel`, whereas this is a TensorFlow automodel.
- You should make sure that you're loading from a directory containing either `(pytorch_model.bin, config.json)` for PyTorch, or `(tf_model.h5, config.json)` for TensorFlow
- You can load a PyTorch model in TensorFlow, but you should specify `from_pt=True`, and you can load a TensorFlow model in PyTorch but you should specify the `from_tf=True` option.
You can find more information about this in the [quick tour](https://huggingface.co/transformers/quicktour.html#under-the-hood-pretrained-models).<|||||>@LysandreJik Thank you but
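For illustration, a minimal sketch of both options (the local directory path is a placeholder):
```python
from transformers import AutoTokenizer, TFAutoModel

# directory containing tf_model.h5 + config.json (+ tokenizer files such as vocab.txt)
model = TFAutoModel.from_pretrained("/path/to/model_dir")

# or, if the directory holds a PyTorch checkpoint (pytorch_model.bin + config.json)
model = TFAutoModel.from_pretrained("/path/to/model_dir", from_pt=True)

tokenizer = AutoTokenizer.from_pretrained("/path/to/model_dir")
```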
```
from transformers import AutoTokenizer, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained("/Users/gt/Desktop/transformers-3.4.0/my_code/")
model = TFAutoModel.from_pretrained("/Users/gt/Desktop/transformers-3.4.0/my_code/")
inputs = tokenizer("Hello world!", return_tensors="tf")
outputs = model(**inputs)
print(outputs)
```

```
Traceback (most recent call last):
File "/Users/gt/Desktop/transformers-3.4.0/my_code/test.py", line 3, in <module>
tokenizer = AutoTokenizer.from_pretrained("/Users/gt/Desktop/transformers-3.4.0/my_code/")
File "/Users/gt/Desktop/transformers-3.4.0/src/transformers/tokenization_auto.py", line 333, in from_pretrained
return tokenizer_class_py.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
File "/Users/gt/Desktop/transformers-3.4.0/src/transformers/tokenization_utils_base.py", line 1591, in from_pretrained
list(cls.vocab_files_names.values()),
OSError: Model name '/Users/gt/Desktop/transformers-3.4.0/my_code/' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). We assumed '/Users/gt/Desktop/transformers-3.4.0/my_code/' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.
```<|||||>Hello, as said before you need your files to be correctly named. Your model should be `pytorch_model.bin` or `tf_model.h5`, your configuration `config.json`, and your tokenizer should also be pointing to a file that has an appropriate name. You seem to be loading a `bert-base-cased` model, which should be used with a `BertTokenizer` that uses `vocab.txt` files, as it is shown in the error.<|||||>Thank you. |
transformers | 8,085 | closed | Merge pull request #1 from huggingface/master | Version track
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| 10-27-2020 08:45:29 | 10-27-2020 08:45:29 | Closing as I believe this is an error :) |
transformers | 8,084 | closed | Fix tf export path type in notebooks/04-onnx-export.ipynb | 10-27-2020 06:36:14 | 10-27-2020 06:36:14 | This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
|
transformers | 8,083 | closed | FastFormers into transformers | # 🌟 New model addition
## Model description
We just open-sourced [FastFormers](https://arxiv.org/abs/2010.13382) which are our SustaiNLP 2020 systems (FastFormers: Highly Efficient Transformer Models for Natural Language Understanding [paper](https://arxiv.org/abs/2010.13382)).
Currently, we are hosting this on our repository, but would like to merge it back to the transformers repository as an example.
our repo - https://github.com/microsoft/fastformers
For the purpose of the shared task, this is implemented purely with the SuperGLUE data set.
So, it's dependent on Alex Wang's (@W4ngatang) SuperGLUE data processing pipeline.
Also, many parts of the implementation are based on Alex's.
(https://github.com/W4ngatang/transformers/tree/superglue)
What would be the best way to merge this back?
## Open source status
* [x] the model implementation is available: https://github.com/microsoft/fastformers/blob/main/examples/fastformers/run_superglue.py
* [x] the model weights are available: demo systems are uploaded. https://github.com/microsoft/fastformers/releases/tag/v0.1-model
* [x] who are the authors: @ykim362
| 10-27-2020 04:51:44 | 10-27-2020 04:51:44 | Hi! This is great, thanks for offering to contribute it! From what I understand, `FastFormers` contains several scripts that can be applied to `transformers` models out of the box, that is, training, distillation, pruning, using quantization alongside the onnx runtime and fp16 optimizations.
Is that correct? If that is so, the easiest way would be to add the corresponding scripts to the `examples/` directory, probably under `examples/fastformers`. If there are modifications made to the model themselves, we can take a look together at how we can integrate those in the library.<|||||>Hi, thanks for your interest! From what I understand, I think your model falls in the category of dynamic acceleration. For these types of paper, I recommend you to integrate it to `examples/`, just like [PABEE](https://github.com/huggingface/transformers/tree/master/examples/bert-loses-patience) and [DeeBERT](https://github.com/huggingface/transformers/tree/master/examples/deebert). I've emailed you an invitation to our Slack channel if it works for you. cc @LysandreJik <|||||> @LysandreJik yes, that is correct. Thanks @JetRunner, let's discuss more on the slack.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>I am sorry, but I have been fully loaded with some other stuffs. I won't be able to make a progress. I'd like to close this to avoid any confusion. |
transformers | 8,082 | closed | Fix doc examples | Fix many `{model_class}.from_pretrained())` -> `{model_class}.from_pretrained()`. Hope it helps.
documentation: @sgugger
| 10-27-2020 03:20:10 | 10-27-2020 03:20:10 | |
transformers | 8,081 | closed | Move style_doc to extra_quality_checks | Previously, if you have a doc style error in a .rst file,
```bash
python utils/style_doc.py $(modified_py_files) --max_len 119;
```
wouldn't catch it | 10-27-2020 03:06:33 | 10-27-2020 03:06:33 | Ideally there should be a modified_py_and_rst_files to speed up this check (and only apply it to modified files), but this works in the meantime. @stas00 if you want to do that last bit of optimization, let me know, otherwise I'll do that later.<|||||>Go for it, @sgugger. The modified files var is there, so it should be easy to apply it anywhere.
If you get stuck I'm here to help. |
transformers | 8,080 | closed | Pre style | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| 10-27-2020 03:02:23 | 10-27-2020 03:02:23 | |
transformers | 8,079 | closed | Best practice to use this great repo for industry application. | # ❓ Questions & Help
Should I download all the source code and put it into PyCharm, or pip-install the package and use the API?
## Details
I want to pretrain and fine-tune the models here on our own dataset. | 10-27-2020 02:59:29 | 10-27-2020 02:59:29 | download all the source code |
transformers | 8,078 | closed | Hope more GPT Chinese pretrained model. | # 🚀 Feature request
Hoping for more GPT Chinese pretrained models.

Thank you very much. | 10-27-2020 02:55:21 | 10-27-2020 02:55:21 | Have you checked out the filtered list on the model hub? https://huggingface.co/models?filter=zh <|||||>Thank you! |
transformers | 8,077 | closed | Longformer crashes for position embeddings indexing? | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.4.0
- Platform: Linux-3.10.0-1127.19.1.el7.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.5.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: apex ddp
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
blenderbot: @mariamabarham
Bart: @sshleifer
Marian: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
--> @patrickvonplaten maybe?
## Information
Model I am using (Bert, XLNet ...): Longformer
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. use apex ddp with longformerforsequenceclassification
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
my code snippet:
```python
def train(self):
self.model.train()
losses = []
if isinstance(self.train_loader.sampler, DistributedSampler):
self.train_loader.sampler.set_epoch(self.epoch)
for qids, dids, queries, documents, y in self.train_loader:
encoded = self._tokenizer.batch_encode_plus(batch_text_or_text_pairs=list(zip(queries, documents)),
truncation="longest_first", add_special_tokens=True,
max_length = self.max_len, padding="max_length",
is_pretokenized=False, return_tensors="pt",
return_attention_mask=True, return_token_type_ids=True)
input_ids = encoded["input_ids"].cuda()
attention_mask = encoded["attention_mask"].cuda()
token_type_ids = encoded["token_type_ids"].cuda()
y = torch.tensor(y).unsqueeze(1).cuda()
global_attention_mask = self.get_global_attention(encoded["input_ids"], self.max_len, self._tokenizer.sep_token_id)[0].cuda()
self.optimizer.zero_grad()
outputs = self.model(
input_ids=input_ids,
attention_mask=attention_mask,
global_attention_mask=global_attention_mask,
labels=y
)
loss = outputs[0]
with amp.scale_loss(loss, self.optimizer) as scaled_loss:
scaled_loss.backward()
self.optimizer.step()
```
Where the data are queries and documents that are either relevant (y=1) or irrelevant (y=0). Each input is the concatenation of a query and a document. ```get_global_attention()``` is a function to give global attention to query tokens.
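For reference, a rough sketch of what a helper like `get_global_attention` might look like (purely illustrative, this is not the reporter's actual code):
```python
import torch


def get_global_attention(input_ids, max_len, sep_token_id):
    # give global attention to every token up to (and including) the first SEP,
    # i.e. the query part of each "query [SEP] document" input
    # (max_len is kept only to match the call signature; it is unused in this sketch)
    mask = torch.zeros_like(input_ids)
    for row, ids in enumerate(input_ids):
        sep_positions = (ids == sep_token_id).nonzero(as_tuple=True)[0]
        first_sep = sep_positions[0].item() if len(sep_positions) > 0 else 0
        mask[row, : first_sep + 1] = 1
    return (mask,)
```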
I find that for some batches (not all batches!), the code gives the following errors, which are very confusing to me:
```
INFO:__main__:Namespace(apex_level='O2', batch_size=1, cased=1, debug=0, encoder_lr=1e-05, eval_step=1, finetune_embedding=0, local_rank=0, model_path='allenai/longformer-base-4096', model_type='longformer', num_epochs=20, num_ft_encoders=2, num_neg=1, projector_lr=1e-05, seed=611)
Some weights of the model checkpoint at allenai/longformer-base-4096 were not used when initializing LongformerForSequenceClassification: ['lm_head.bias', 'lm_head.dense.weight', 'lm_head.dense.bias', 'lm_head.layer_norm.weight', 'lm_head.layer_norm.bias', 'lm_head.decoder.weight']
- This IS expected if you are initializing LongformerForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).
- This IS NOT expected if you are initializing LongformerForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of LongformerForSequenceClassification were not initialized from the model checkpoint at allenai/longformer-base-4096 and are newly initialized: ['classifier.dense.weight', 'classifier.dense.bias', 'classifier.out_proj.weight', 'classifier.out_proj.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
INFO:__main__:Reading data from /....../sampled
INFO:root:Number of positive query-document pairs in [train] set: 67
INFO:root:Number of labelled query-document pairs in [dev] set: 2000
INFO:root:Number of labelled query-document pairs in [test] set: 2000
INFO:__main__:Data reading done ...
INFO:__main__:adding 10-th encoder to optimizer...
INFO:__main__:adding 11-th encoder to optimizer...
Selected optimization level O2: FP16 training with FP32 batchnorm and FP32 master weights.
Defaults for this optimization level are:
enabled : True
opt_level : O2
cast_model_type : torch.float16
patch_torch_functions : False
keep_batchnorm_fp32 : True
master_weights : True
loss_scale : dynamic
Processing user overrides (additional kwargs that are not None)...
After processing overrides, optimization options are:
enabled : True
opt_level : O2
cast_model_type : torch.float16
patch_torch_functions : False
keep_batchnorm_fp32 : True
master_weights : True
loss_scale : dynamic
INFO:__main__:process[0]: training epoch 0 ...
/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/transformers/tokenization_utils.py:547: FutureWarning: `is_pretokenized` is deprecated and will be removed in a future version, use `is_split_into_words` instead.
warnings.warn(
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 32768.0
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [10,0,0], thread: [32,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [10,0,0], thread: [33,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
[... the same device-side assert (`srcIndex < srcSelectDimSize` in indexSelectLargeIndex, THCTensorIndex.cu:272) is repeated for the remaining CUDA blocks and threads (blocks [10,0,0], [2,0,0], [3,0,0]) ...]
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [117,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [118,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [119,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [120,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [121,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [122,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [123,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [124,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [125,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [126,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [2,0,0], thread: [127,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [0,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [1,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [2,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [3,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [4,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [5,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [6,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [7,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [8,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [9,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [10,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [11,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [12,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [13,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [14,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [15,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [16,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [17,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [18,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [19,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [20,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [21,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [22,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [23,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [24,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [25,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [26,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [27,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [28,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [29,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [30,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [31,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [32,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [33,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [34,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [35,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [36,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [37,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [38,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [39,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [40,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [41,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [42,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [43,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [44,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [45,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [46,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [47,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [48,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [49,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [50,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [51,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [52,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [53,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [54,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [55,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [56,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [57,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [58,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [59,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [60,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [61,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [62,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [63,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [32,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [33,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [34,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [35,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [36,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [37,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [38,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [39,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [40,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [41,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [42,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [43,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [44,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [45,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [46,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [47,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [48,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [49,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [50,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [51,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [52,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [53,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [54,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [55,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [56,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [57,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [58,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [59,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [60,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [61,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [62,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [3,0,0], thread: [63,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [96,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [97,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [98,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [99,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [100,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [101,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [102,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [103,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [104,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [105,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [106,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [107,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [108,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [109,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [110,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [111,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [112,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [113,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [114,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [115,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [116,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [117,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [118,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [119,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [120,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [121,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [122,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [123,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [124,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [125,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [126,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [127,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [64,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [65,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [66,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [67,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [68,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [69,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [70,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [71,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [72,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [73,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [74,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [75,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [76,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [77,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [78,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [79,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [80,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [81,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [82,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [83,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [84,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [85,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [86,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [87,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [88,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [89,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [90,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [91,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [92,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [93,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [94,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [11,0,0], thread: [95,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [96,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [97,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [98,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [99,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [100,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [101,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [102,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [103,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [104,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [105,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [106,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [107,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [108,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [109,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [110,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [111,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [112,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [113,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [114,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [115,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [116,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [117,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [118,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [119,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [120,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [121,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [122,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [123,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [124,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [125,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [126,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [127,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [64,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [65,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [66,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [67,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [68,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [69,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [70,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [71,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [72,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [73,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [74,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [75,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [76,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [77,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [78,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [79,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [80,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [81,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [82,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [83,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [84,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [85,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [86,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [87,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [88,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [89,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [90,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [91,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [92,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [93,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [94,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [95,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [96,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [97,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [98,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [99,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [100,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [101,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [102,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [103,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [104,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [105,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [106,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [107,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [108,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [109,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [110,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [111,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [112,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [113,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [114,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [115,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [116,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [117,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [118,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [119,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [120,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [121,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [122,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [123,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [124,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [125,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [126,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [127,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [64,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [65,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [66,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [67,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [68,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [69,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [70,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [71,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [72,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [73,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [74,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [75,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [76,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [77,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [78,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [79,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [80,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [81,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [82,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [83,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [84,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [85,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [86,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [87,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [88,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [89,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [90,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [91,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [92,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [93,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [94,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [95,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [32,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [33,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [34,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [35,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [36,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [37,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [38,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [39,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [40,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [41,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [42,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [43,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [44,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [45,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [46,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [47,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [48,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [49,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [50,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [51,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [52,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [53,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [54,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [55,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [56,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [57,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [58,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [59,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [60,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [61,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [62,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [63,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [0,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [1,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [2,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [3,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [4,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [5,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [6,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [7,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [8,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [9,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [10,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [11,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [12,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [13,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [14,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [15,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [16,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [17,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [18,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [19,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [20,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [21,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [22,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [23,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [24,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [25,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [26,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [27,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [28,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [29,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [30,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [31,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [32,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [33,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [34,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [35,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [36,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [37,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [38,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [39,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [40,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [41,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [42,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [43,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [44,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [45,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [46,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [47,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [48,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [49,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [50,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [51,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [52,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [53,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [54,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [55,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [56,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [57,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [58,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [59,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [60,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [61,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [62,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [7,0,0], thread: [63,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [0,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [1,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [2,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [3,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [4,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [5,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [6,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [7,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [8,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [9,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [10,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [11,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [12,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [13,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [14,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [15,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [16,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [17,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [18,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [19,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [20,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [21,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [22,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [23,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [24,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [25,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [26,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [27,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [28,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [29,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [30,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [9,0,0], thread: [31,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [0,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [1,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [2,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [3,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [4,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [5,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [6,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [7,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [8,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [9,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [10,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [11,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [12,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [13,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [14,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [15,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [16,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [17,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [18,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [19,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [20,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [21,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [22,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [23,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [24,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [25,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [26,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [27,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [28,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [29,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [30,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [31,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [6,0,0], thread: [64,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [6,0,0], thread: [65,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [6,0,0], thread: [66,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [6,0,0], thread: [67,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [6,0,0], thread: [68,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [6,0,0], thread: [69,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [6,0,0], thread: [70,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [6,0,0], thread: [71,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [6,0,0], thread: [72,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [6,0,0], thread: [73,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [6,0,0], thread: [74,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [6,0,0], thread: [75,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [6,0,0], thread: [76,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [6,0,0], thread: [77,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [6,0,0], thread: [78,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [6,0,0], thread: [79,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [6,0,0], thread: [80,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [6,0,0], thread: [81,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [6,0,0], thread: [82,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [6,0,0], thread: [83,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [6,0,0], thread: [84,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [6,0,0], thread: [85,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [6,0,0], thread: [86,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [6,0,0], thread: [87,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [6,0,0], thread: [88,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [6,0,0], thread: [89,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [6,0,0], thread: [90,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [6,0,0], thread: [91,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [6,0,0], thread: [92,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [6,0,0], thread: [93,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [6,0,0], thread: [94,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [6,0,0], thread: [95,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [32,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [33,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [34,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [35,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [36,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [37,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [38,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [39,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [40,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [41,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [42,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [43,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [44,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [45,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [46,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [47,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [48,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [49,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [50,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [51,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [52,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [53,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [54,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [55,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [56,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [57,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [58,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [59,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [60,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [61,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [62,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [63,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [96,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [97,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [98,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [99,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [100,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [101,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [102,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [103,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [104,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [105,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [106,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [107,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [108,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [109,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [110,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [111,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [112,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [113,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [114,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [115,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [116,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [117,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [118,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [119,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [120,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [121,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [122,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [123,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [124,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [125,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [126,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [8,0,0], thread: [127,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [64,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [65,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [66,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [67,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [68,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [69,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [70,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [71,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [72,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [73,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [74,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [75,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [76,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1591914886554/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [0,0,0], thread: [77,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
...... (hundreds of additional identical "Assertion `srcIndex < srcSelectDimSize` failed" lines from indexSelectLargeIndex omitted to save space; only the block and thread indices differ)
Traceback (most recent call last):
File "finetune-marco.py", line 93, in <module>
marco.run()
File "/mnt/nfs/work1/allan/user/LF-for-IR/Marco.py", line 167, in run
self.train()
File "/mnt/nfs/work1/allan/user/LF-for-IR/Marco.py", line 223, in train
outputs = self.model(
File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/apex/amp/_initialize.py", line 196, in new_fwd
output = old_fwd(*applier(args, input_caster),
File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/apex/parallel/distributed.py", line 560, in forward
result = self.module(*inputs, **kwargs)
File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/transformers/modeling_longformer.py", line 1442, in forward
outputs = self.longformer(
File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/transformers/modeling_longformer.py", line 1262, in forward
encoder_outputs = self.encoder(
File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/transformers/modeling_longformer.py", line 903, in forward
layer_outputs = layer_module(
File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/transformers/modeling_longformer.py", line 849, in forward
self_attn_outputs = self.attention(
File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/transformers/modeling_longformer.py", line 793, in forward
self_outputs = self.self(
File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/transformers/modeling_longformer.py", line 246, in forward
is_global_attn = is_index_global_attn.flatten().any().item()
RuntimeError: CUDA error: device-side assert triggered
NCCL error in: /opt/conda/conda-bld/pytorch_1591914886554/work/torch/lib/c10d/../c10d/NCCLUtils.hpp:69, unhandled cuda error, NCCL version 2.4.8
Traceback (most recent call last):
File "/home/user/miniconda3/envs/marco/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/user/miniconda3/envs/marco/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/torch/distributed/launch.py", line 263, in <module>
main()
File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/torch/distributed/launch.py", line 258, in main
raise subprocess.CalledProcessError(returncode=process.returncode,
subprocess.CalledProcessError: Command '['/home/user/miniconda3/envs/marco/bin/python', '-u', 'finetune-marco.py', '--local_rank=0', '--model_type', 'longformer', '--model_path', 'allenai/longformer-base-4096', '--batch_size', '1', '--finetune_embedding', '0', '--cased', '1', '--num_neg', '1', '--eval_step', '1', '--num_epochs', '20', '--apex_level', 'O2', '--encoder_lr', '1e-5', '--projector_lr', '1e-5', '--num_ft_encoders', '2', '--seed', '611']' died with <Signals.SIGABRT: 6>.
```
## Expected behavior
I don't think this error should occur. I have tried other pretrained models, such as base BERT, and they ran just fine. Can someone help me interpret the error message here? Thanks!
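
My current understanding – which may well be wrong – is that the `srcIndex < srcSelectDimSize` assertion from `indexSelectLargeIndex` means some index fed to an embedding lookup (token ids, position ids or token type ids) is at least as large as the corresponding embedding table, and that the traceback only points at `is_index_global_attn.flatten().any().item()` because CUDA kernels run asynchronously and the error surfaces at the next synchronization point. The snippet below is a minimal, hypothetical sanity check (neither `check_indices` nor `batch_input_ids` exist in `finetune-marco.py`; the latter stands in for whatever batch `Marco.py` feeds to `self.model(...)`):

```python
# Hypothetical sanity check – not taken from the original training script.
# `batch_input_ids` is a placeholder for the batch that Marco.py passes to self.model(...).
import torch
from transformers import LongformerConfig

config = LongformerConfig.from_pretrained("allenai/longformer-base-4096")

def check_indices(batch_input_ids: torch.Tensor) -> None:
    # Token ids must be strictly smaller than the word-embedding table size.
    max_token_id = int(batch_input_ids.max())
    assert max_token_id < config.vocab_size, (
        f"token id {max_token_id} >= vocab_size {config.vocab_size}"
    )

    # Longformer's position ids start at padding_idx + 1 = 2, so a sequence can use at most
    # max_position_embeddings - 2 positions (4096 for this checkpoint; the table has 4098 rows).
    seq_len = batch_input_ids.shape[1]
    assert seq_len <= config.max_position_embeddings - 2, (
        f"sequence length {seq_len} exceeds the {config.max_position_embeddings - 2} usable positions"
    )
```

Running the failing step with `CUDA_LAUNCH_BLOCKING=1` should also make the assert point at the offending call rather than at a later, unrelated line.
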
Could this be due to an OOM error and cuda not recovering from it?<|||||>> Hi, have you tried running this code without using CUDA (i.e., on CPU)? The errors are usually more intelligible that way.
>
> Could this be due to an OOM error and cuda not recovering from it?
Hi @LysandreJik , thanks for your response! I made some exploration following your suggestion: I move the job to CPU only and tried again. I think this time it has something to do with the position embeddings indexing. Error message:
```
Traceback (most recent call last):
File "finetune-marco.py", line 94, in <module>
marco.run()
File "/mnt/nfs/work1/user/user/LF-for-IR/Marco.py", line 177, in run
self.train()
File "/mnt/nfs/work1/user/user/LF-for-IR/Marco.py", line 258, in train
outputs = self.model(
File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/transformers/modeling_longformer.py", line 1445, in forward
outputs = self.longformer(
File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/transformers/modeling_longformer.py", line 1261, in forward
embedding_output = self.embeddings(
File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/transformers/modeling_longformer.py", line 170, in forward
position_embeddings = self.position_embeddings(position_ids)
File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/torch/nn/modules/sparse.py", line 112, in forward
return F.embedding(
File "/home/user/miniconda3/envs/marco/lib/python3.8/site-packages/torch/nn/functional.py", line 1724, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
IndexError: index out of range in self
```
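
On CPU the failure is a plain `IndexError` inside `self.position_embeddings(position_ids)`, i.e. some position id is not smaller than the number of rows in the position-embedding table. I can't be certain from the log alone, but a common way to end up here with `allenai/longformer-base-4096` is feeding sequences longer than the 4096 usable positions (the table has 4098 rows and Longformer's position ids start at 2), for example when a query and a long passage are concatenated without truncation. A minimal sketch of the tokenizer-side check (`query` and `passage` are placeholder strings, not variables from my script):

```python
# Hypothetical sketch; `query` and `passage` are placeholder strings.
from transformers import LongformerTokenizer

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")

query = "example query"
passage = "example passage " * 2000  # deliberately long

# Without truncation this pair can exceed 4096 tokens and overflow the position embeddings.
enc = tokenizer(query, passage, truncation=True, max_length=4096, return_tensors="pt")
print(enc["input_ids"].shape)  # at most (1, 4096)
```
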
So if I go to `class LongformerEmbeddings(nn.Module)` in `modeling_longformer.py` and print `self.position_ids` and the `position_ids` passed to `forward()`, I get:
```
tensor([[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11,
12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23,
             ...   (rows omitted to save space; the values continue consecutively from 24 through 1511)
1512, 1513, 1514, 1515, 1516, 1517, 1518, 1519, 1520, 1521, 1522, 1523,
1524, 1525, 1526, 1527, 1528, 1529, 1530, 1531, 1532, 1533, 1534, 1535,
1536, 1537, 1538, 1539, 1540, 1541, 1542, 1543, 1544, 1545, 1546, 1547,
1548, 1549, 1550, 1551, 1552, 1553, 1554, 1555, 1556, 1557, 1558, 1559,
1560, 1561, 1562, 1563, 1564, 1565, 1566, 1567, 1568, 1569, 1570, 1571,
1572, 1573, 1574, 1575, 1576, 1577, 1578, 1579, 1580, 1581, 1582, 1583,
1584, 1585, 1586, 1587, 1588, 1589, 1590, 1591, 1592, 1593, 1594, 1595,
1596, 1597, 1598, 1599, 1600, 1601, 1602, 1603, 1604, 1605, 1606, 1607,
1608, 1609, 1610, 1611, 1612, 1613, 1614, 1615, 1616, 1617, 1618, 1619,
1620, 1621, 1622, 1623, 1624, 1625, 1626, 1627, 1628, 1629, 1630, 1631,
1632, 1633, 1634, 1635, 1636, 1637, 1638, 1639, 1640, 1641, 1642, 1643,
1644, 1645, 1646, 1647, 1648, 1649, 1650, 1651, 1652, 1653, 1654, 1655,
1656, 1657, 1658, 1659, 1660, 1661, 1662, 1663, 1664, 1665, 1666, 1667,
1668, 1669, 1670, 1671, 1672, 1673, 1674, 1675, 1676, 1677, 1678, 1679,
1680, 1681, 1682, 1683, 1684, 1685, 1686, 1687, 1688, 1689, 1690, 1691,
1692, 1693, 1694, 1695, 1696, 1697, 1698, 1699, 1700, 1701, 1702, 1703,
1704, 1705, 1706, 1707, 1708, 1709, 1710, 1711, 1712, 1713, 1714, 1715,
1716, 1717, 1718, 1719, 1720, 1721, 1722, 1723, 1724, 1725, 1726, 1727,
1728, 1729, 1730, 1731, 1732, 1733, 1734, 1735, 1736, 1737, 1738, 1739,
1740, 1741, 1742, 1743, 1744, 1745, 1746, 1747, 1748, 1749, 1750, 1751,
1752, 1753, 1754, 1755, 1756, 1757, 1758, 1759, 1760, 1761, 1762, 1763,
1764, 1765, 1766, 1767, 1768, 1769, 1770, 1771, 1772, 1773, 1774, 1775,
1776, 1777, 1778, 1779, 1780, 1781, 1782, 1783, 1784, 1785, 1786, 1787,
1788, 1789, 1790, 1791, 1792, 1793, 1794, 1795, 1796, 1797, 1798, 1799,
1800, 1801, 1802, 1803, 1804, 1805, 1806, 1807, 1808, 1809, 1810, 1811,
1812, 1813, 1814, 1815, 1816, 1817, 1818, 1819, 1820, 1821, 1822, 1823,
1824, 1825, 1826, 1827, 1828, 1829, 1830, 1831, 1832, 1833, 1834, 1835,
1836, 1837, 1838, 1839, 1840, 1841, 1842, 1843, 1844, 1845, 1846, 1847,
1848, 1849, 1850, 1851, 1852, 1853, 1854, 1855, 1856, 1857, 1858, 1859,
1860, 1861, 1862, 1863, 1864, 1865, 1866, 1867, 1868, 1869, 1870, 1871,
1872, 1873, 1874, 1875, 1876, 1877, 1878, 1879, 1880, 1881, 1882, 1883,
1884, 1885, 1886, 1887, 1888, 1889, 1890, 1891, 1892, 1893, 1894, 1895,
1896, 1897, 1898, 1899, 1900, 1901, 1902, 1903, 1904, 1905, 1906, 1907,
1908, 1909, 1910, 1911, 1912, 1913, 1914, 1915, 1916, 1917, 1918, 1919,
1920, 1921, 1922, 1923, 1924, 1925, 1926, 1927, 1928, 1929, 1930, 1931,
1932, 1933, 1934, 1935, 1936, 1937, 1938, 1939, 1940, 1941, 1942, 1943,
1944, 1945, 1946, 1947, 1948, 1949, 1950, 1951, 1952, 1953, 1954, 1955,
1956, 1957, 1958, 1959, 1960, 1961, 1962, 1963, 1964, 1965, 1966, 1967,
1968, 1969, 1970, 1971, 1972, 1973, 1974, 1975, 1976, 1977, 1978, 1979,
1980, 1981, 1982, 1983, 1984, 1985, 1986, 1987, 1988, 1989, 1990, 1991,
1992, 1993, 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003,
2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015,
2016, 2017, 2018, 2019, 2020, 2021, 2022, 2023, 2024, 2025, 2026, 2027,
2028, 2029, 2030, 2031, 2032, 2033, 2034, 2035, 2036, 2037, 2038, 2039,
2040, 2041, 2042, 2043, 2044, 2045, 2046, 2047, 2048, 2049, 2050, 2051,
2052, 2053, 2054, 2055, 2056, 2057, 2058, 2059, 2060, 2061, 2062, 2063,
2064, 2065, 2066, 2067, 2068, 2069, 2070, 2071, 2072, 2073, 2074, 2075,
2076, 2077, 2078, 2079, 2080, 2081, 2082, 2083, 2084, 2085, 2086, 2087,
2088, 2089, 2090, 2091, 2092, 2093, 2094, 2095, 2096, 2097, 2098, 2099,
2100, 2101, 2102, 2103, 2104, 2105, 2106, 2107, 2108, 2109, 2110, 2111,
2112, 2113, 2114, 2115, 2116, 2117, 2118, 2119, 2120, 2121, 2122, 2123,
2124, 2125, 2126, 2127, 2128, 2129, 2130, 2131, 2132, 2133, 2134, 2135,
2136, 2137, 2138, 2139, 2140, 2141, 2142, 2143, 2144, 2145, 2146, 2147,
2148, 2149, 2150, 2151, 2152, 2153, 2154, 2155, 2156, 2157, 2158, 2159,
2160, 2161, 2162, 2163, 2164, 2165, 2166, 2167, 2168, 2169, 2170, 2171,
2172, 2173, 2174, 2175, 2176, 2177, 2178, 2179, 2180, 2181, 2182, 2183,
2184, 2185, 2186, 2187, 2188, 2189, 2190, 2191, 2192, 2193, 2194, 2195,
2196, 2197, 2198, 2199, 2200, 2201, 2202, 2203, 2204, 2205, 2206, 2207,
2208, 2209, 2210, 2211, 2212, 2213, 2214, 2215, 2216, 2217, 2218, 2219,
2220, 2221, 2222, 2223, 2224, 2225, 2226, 2227, 2228, 2229, 2230, 2231,
2232, 2233, 2234, 2235, 2236, 2237, 2238, 2239, 2240, 2241, 2242, 2243,
2244, 2245, 2246, 2247, 2248, 2249, 2250, 2251, 2252, 2253, 2254, 2255,
2256, 2257, 2258, 2259, 2260, 2261, 2262, 2263, 2264, 2265, 2266, 2267,
2268, 2269, 2270, 2271, 2272, 2273, 2274, 2275, 2276, 2277, 2278, 2279,
2280, 2281, 2282, 2283, 2284, 2285, 2286, 2287, 2288, 2289, 2290, 2291,
2292, 2293, 2294, 2295, 2296, 2297, 2298, 2299, 2300, 2301, 2302, 2303,
2304, 2305, 2306, 2307, 2308, 2309, 2310, 2311, 2312, 2313, 2314, 2315,
2316, 2317, 2318, 2319, 2320, 2321, 2322, 2323, 2324, 2325, 2326, 2327,
2328, 2329, 2330, 2331, 2332, 2333, 2334, 2335, 2336, 2337, 2338, 2339,
2340, 2341, 2342, 2343, 2344, 2345, 2346, 2347, 2348, 2349, 2350, 2351,
2352, 2353, 2354, 2355, 2356, 2357, 2358, 2359, 2360, 2361, 2362, 2363,
2364, 2365, 2366, 2367, 2368, 2369, 2370, 2371, 2372, 2373, 2374, 2375,
2376, 2377, 2378, 2379, 2380, 2381, 2382, 2383, 2384, 2385, 2386, 2387,
2388, 2389, 2390, 2391, 2392, 2393, 2394, 2395, 2396, 2397, 2398, 2399,
2400, 2401, 2402, 2403, 2404, 2405, 2406, 2407, 2408, 2409, 2410, 2411,
2412, 2413, 2414, 2415, 2416, 2417, 2418, 2419, 2420, 2421, 2422, 2423,
2424, 2425, 2426, 2427, 2428, 2429, 2430, 2431, 2432, 2433, 2434, 2435,
2436, 2437, 2438, 2439, 2440, 2441, 2442, 2443, 2444, 2445, 2446, 2447,
2448, 2449, 2450, 2451, 2452, 2453, 2454, 2455, 2456, 2457, 2458, 2459,
2460, 2461, 2462, 2463, 2464, 2465, 2466, 2467, 2468, 2469, 2470, 2471,
2472, 2473, 2474, 2475, 2476, 2477, 2478, 2479, 2480, 2481, 2482, 2483,
2484, 2485, 2486, 2487, 2488, 2489, 2490, 2491, 2492, 2493, 2494, 2495,
2496, 2497, 2498, 2499, 2500, 2501, 2502, 2503, 2504, 2505, 2506, 2507,
2508, 2509, 2510, 2511, 2512, 2513, 2514, 2515, 2516, 2517, 2518, 2519,
2520, 2521, 2522, 2523, 2524, 2525, 2526, 2527, 2528, 2529, 2530, 2531,
2532, 2533, 2534, 2535, 2536, 2537, 2538, 2539, 2540, 2541, 2542, 2543,
2544, 2545, 2546, 2547, 2548, 2549, 2550, 2551, 2552, 2553, 2554, 2555,
2556, 2557, 2558, 2559, 2560, 2561, 2562, 2563, 2564, 2565, 2566, 2567,
2568, 2569, 2570, 2571, 2572, 2573, 2574, 2575, 2576, 2577, 2578, 2579,
2580, 2581, 2582, 2583, 2584, 2585, 2586, 2587, 2588, 2589, 2590, 2591,
2592, 2593, 2594, 2595, 2596, 2597, 2598, 2599, 2600, 2601, 2602, 2603,
2604, 2605, 2606, 2607, 2608, 2609, 2610, 2611, 2612, 2613, 2614, 2615,
2616, 2617, 2618, 2619, 2620, 2621, 2622, 2623, 2624, 2625, 2626, 2627,
2628, 2629, 2630, 2631, 2632, 2633, 2634, 2635, 2636, 2637, 2638, 2639,
2640, 2641, 2642, 2643, 2644, 2645, 2646, 2647, 2648, 2649, 2650, 2651,
2652, 2653, 2654, 2655, 2656, 2657, 2658, 2659, 2660, 2661, 2662, 2663,
2664, 2665, 2666, 2667, 2668, 2669, 2670, 2671, 2672, 2673, 2674, 2675,
2676, 2677, 2678, 2679, 2680, 2681, 2682, 2683, 2684, 2685, 2686, 2687,
2688, 2689, 2690, 2691, 2692, 2693, 2694, 2695, 2696, 2697, 2698, 2699,
2700, 2701, 2702, 2703, 2704, 2705, 2706, 2707, 2708, 2709, 2710, 2711,
2712, 2713, 2714, 2715, 2716, 2717, 2718, 2719, 2720, 2721, 2722, 2723,
2724, 2725, 2726, 2727, 2728, 2729, 2730, 2731, 2732, 2733, 2734, 2735,
2736, 2737, 2738, 2739, 2740, 2741, 2742, 2743, 2744, 2745, 2746, 2747,
2748, 2749, 2750, 2751, 2752, 2753, 2754, 2755, 2756, 2757, 2758, 2759,
2760, 2761, 2762, 2763, 2764, 2765, 2766, 2767, 2768, 2769, 2770, 2771,
2772, 2773, 2774, 2775, 2776, 2777, 2778, 2779, 2780, 2781, 2782, 2783,
2784, 2785, 2786, 2787, 2788, 2789, 2790, 2791, 2792, 2793, 2794, 2795,
2796, 2797, 2798, 2799, 2800, 2801, 2802, 2803, 2804, 2805, 2806, 2807,
2808, 2809, 2810, 2811, 2812, 2813, 2814, 2815, 2816, 2817, 2818, 2819,
2820, 2821, 2822, 2823, 2824, 2825, 2826, 2827, 2828, 2829, 2830, 2831,
2832, 2833, 2834, 2835, 2836, 2837, 2838, 2839, 2840, 2841, 2842, 2843,
2844, 2845, 2846, 2847, 2848, 2849, 2850, 2851, 2852, 2853, 2854, 2855,
2856, 2857, 2858, 2859, 2860, 2861, 2862, 2863, 2864, 2865, 2866, 2867,
2868, 2869, 2870, 2871, 2872, 2873, 2874, 2875, 2876, 2877, 2878, 2879,
2880, 2881, 2882, 2883, 2884, 2885, 2886, 2887, 2888, 2889, 2890, 2891,
2892, 2893, 2894, 2895, 2896, 2897, 2898, 2899, 2900, 2901, 2902, 2903,
2904, 2905, 2906, 2907, 2908, 2909, 2910, 2911, 2912, 2913, 2914, 2915,
2916, 2917, 2918, 2919, 2920, 2921, 2922, 2923, 2924, 2925, 2926, 2927,
2928, 2929, 2930, 2931, 2932, 2933, 2934, 2935, 2936, 2937, 2938, 2939,
2940, 2941, 2942, 2943, 2944, 2945, 2946, 2947, 2948, 2949, 2950, 2951,
2952, 2953, 2954, 2955, 2956, 2957, 2958, 2959, 2960, 2961, 2962, 2963,
2964, 2965, 2966, 2967, 2968, 2969, 2970, 2971, 2972, 2973, 2974, 2975,
2976, 2977, 2978, 2979, 2980, 2981, 2982, 2983, 2984, 2985, 2986, 2987,
2988, 2989, 2990, 2991, 2992, 2993, 2994, 2995, 2996, 2997, 2998, 2999,
3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007, 3008, 3009, 3010, 3011,
3012, 3013, 3014, 3015, 3016, 3017, 3018, 3019, 3020, 3021, 3022, 3023,
3024, 3025, 3026, 3027, 3028, 3029, 3030, 3031, 3032, 3033, 3034, 3035,
3036, 3037, 3038, 3039, 3040, 3041, 3042, 3043, 3044, 3045, 3046, 3047,
3048, 3049, 3050, 3051, 3052, 3053, 3054, 3055, 3056, 3057, 3058, 3059,
3060, 3061, 3062, 3063, 3064, 3065, 3066, 3067, 3068, 3069, 3070, 3071,
3072, 3073, 3074, 3075, 3076, 3077, 3078, 3079, 3080, 3081, 3082, 3083,
3084, 3085, 3086, 3087, 3088, 3089, 3090, 3091, 3092, 3093, 3094, 3095,
3096, 3097, 3098, 3099, 3100, 3101, 3102, 3103, 3104, 3105, 3106, 3107,
3108, 3109, 3110, 3111, 3112, 3113, 3114, 3115, 3116, 3117, 3118, 3119,
3120, 3121, 3122, 3123, 3124, 3125, 3126, 3127, 3128, 3129, 3130, 3131,
3132, 3133, 3134, 3135, 3136, 3137, 3138, 3139, 3140, 3141, 3142, 3143,
3144, 3145, 3146, 3147, 3148, 3149, 3150, 3151, 3152, 3153, 3154, 3155,
3156, 3157, 3158, 3159, 3160, 3161, 3162, 3163, 3164, 3165, 3166, 3167,
3168, 3169, 3170, 3171, 3172, 3173, 3174, 3175, 3176, 3177, 3178, 3179,
3180, 3181, 3182, 3183, 3184, 3185, 3186, 3187, 3188, 3189, 3190, 3191,
3192, 3193, 3194, 3195, 3196, 3197, 3198, 3199, 3200, 3201, 3202, 3203,
3204, 3205, 3206, 3207, 3208, 3209, 3210, 3211, 3212, 3213, 3214, 3215,
3216, 3217, 3218, 3219, 3220, 3221, 3222, 3223, 3224, 3225, 3226, 3227,
3228, 3229, 3230, 3231, 3232, 3233, 3234, 3235, 3236, 3237, 3238, 3239,
3240, 3241, 3242, 3243, 3244, 3245, 3246, 3247, 3248, 3249, 3250, 3251,
3252, 3253, 3254, 3255, 3256, 3257, 3258, 3259, 3260, 3261, 3262, 3263,
3264, 3265, 3266, 3267, 3268, 3269, 3270, 3271, 3272, 3273, 3274, 3275,
3276, 3277, 3278, 3279, 3280, 3281, 3282, 3283, 3284, 3285, 3286, 3287,
3288, 3289, 3290, 3291, 3292, 3293, 3294, 3295, 3296, 3297, 3298, 3299,
3300, 3301, 3302, 3303, 3304, 3305, 3306, 3307, 3308, 3309, 3310, 3311,
3312, 3313, 3314, 3315, 3316, 3317, 3318, 3319, 3320, 3321, 3322, 3323,
3324, 3325, 3326, 3327, 3328, 3329, 3330, 3331, 3332, 3333, 3334, 3335,
3336, 3337, 3338, 3339, 3340, 3341, 3342, 3343, 3344, 3345, 3346, 3347,
3348, 3349, 3350, 3351, 3352, 3353, 3354, 3355, 3356, 3357, 3358, 3359,
3360, 3361, 3362, 3363, 3364, 3365, 3366, 3367, 3368, 3369, 3370, 3371,
3372, 3373, 3374, 3375, 3376, 3377, 3378, 3379, 3380, 3381, 3382, 3383,
3384, 3385, 3386, 3387, 3388, 3389, 3390, 3391, 3392, 3393, 3394, 3395,
3396, 3397, 3398, 3399, 3400, 3401, 3402, 3403, 3404, 3405, 3406, 3407,
3408, 3409, 3410, 3411, 3412, 3413, 3414, 3415, 3416, 3417, 3418, 3419,
3420, 3421, 3422, 3423, 3424, 3425, 3426, 3427, 3428, 3429, 3430, 3431,
3432, 3433, 3434, 3435, 3436, 3437, 3438, 3439, 3440, 3441, 3442, 3443,
3444, 3445, 3446, 3447, 3448, 3449, 3450, 3451, 3452, 3453, 3454, 3455,
3456, 3457, 3458, 3459, 3460, 3461, 3462, 3463, 3464, 3465, 3466, 3467,
3468, 3469, 3470, 3471, 3472, 3473, 3474, 3475, 3476, 3477, 3478, 3479,
3480, 3481, 3482, 3483, 3484, 3485, 3486, 3487, 3488, 3489, 3490, 3491,
3492, 3493, 3494, 3495, 3496, 3497, 3498, 3499, 3500, 3501, 3502, 3503,
3504, 3505, 3506, 3507, 3508, 3509, 3510, 3511, 3512, 3513, 3514, 3515,
3516, 3517, 3518, 3519, 3520, 3521, 3522, 3523, 3524, 3525, 3526, 3527,
3528, 3529, 3530, 3531, 3532, 3533, 3534, 3535, 3536, 3537, 3538, 3539,
3540, 3541, 3542, 3543, 3544, 3545, 3546, 3547, 3548, 3549, 3550, 3551,
3552, 3553, 3554, 3555, 3556, 3557, 3558, 3559, 3560, 3561, 3562, 3563,
3564, 3565, 3566, 3567, 3568, 3569, 3570, 3571, 3572, 3573, 3574, 3575,
3576, 3577, 3578, 3579, 3580, 3581, 3582, 3583, 3584, 3585, 3586, 3587,
3588, 3589, 3590, 3591, 3592, 3593, 3594, 3595, 3596, 3597, 3598, 3599,
3600, 3601, 3602, 3603, 3604, 3605, 3606, 3607, 3608, 3609, 3610, 3611,
3612, 3613, 3614, 3615, 3616, 3617, 3618, 3619, 3620, 3621, 3622, 3623,
3624, 3625, 3626, 3627, 3628, 3629, 3630, 3631, 3632, 3633, 3634, 3635,
3636, 3637, 3638, 3639, 3640, 3641, 3642, 3643, 3644, 3645, 3646, 3647,
3648, 3649, 3650, 3651, 3652, 3653, 3654, 3655, 3656, 3657, 3658, 3659,
3660, 3661, 3662, 3663, 3664, 3665, 3666, 3667, 3668, 3669, 3670, 3671,
3672, 3673, 3674, 3675, 3676, 3677, 3678, 3679, 3680, 3681, 3682, 3683,
3684, 3685, 3686, 3687, 3688, 3689, 3690, 3691, 3692, 3693, 3694, 3695,
3696, 3697, 3698, 3699, 3700, 3701, 3702, 3703, 3704, 3705, 3706, 3707,
3708, 3709, 3710, 3711, 3712, 3713, 3714, 3715, 3716, 3717, 3718, 3719,
3720, 3721, 3722, 3723, 3724, 3725, 3726, 3727, 3728, 3729, 3730, 3731,
3732, 3733, 3734, 3735, 3736, 3737, 3738, 3739, 3740, 3741, 3742, 3743,
3744, 3745, 3746, 3747, 3748, 3749, 3750, 3751, 3752, 3753, 3754, 3755,
3756, 3757, 3758, 3759, 3760, 3761, 3762, 3763, 3764, 3765, 3766, 3767,
3768, 3769, 3770, 3771, 3772, 3773, 3774, 3775, 3776, 3777, 3778, 3779,
3780, 3781, 3782, 3783, 3784, 3785, 3786, 3787, 3788, 3789, 3790, 3791,
3792, 3793, 3794, 3795, 3796, 3797, 3798, 3799, 3800, 3801, 3802, 3803,
3804, 3805, 3806, 3807, 3808, 3809, 3810, 3811, 3812, 3813, 3814, 3815,
3816, 3817, 3818, 3819, 3820, 3821, 3822, 3823, 3824, 3825, 3826, 3827,
3828, 3829, 3830, 3831, 3832, 3833, 3834, 3835, 3836, 3837, 3838, 3839,
3840, 3841, 3842, 3843, 3844, 3845, 3846, 3847, 3848, 3849, 3850, 3851,
3852, 3853, 3854, 3855, 3856, 3857, 3858, 3859, 3860, 3861, 3862, 3863,
3864, 3865, 3866, 3867, 3868, 3869, 3870, 3871, 3872, 3873, 3874, 3875,
3876, 3877, 3878, 3879, 3880, 3881, 3882, 3883, 3884, 3885, 3886, 3887,
3888, 3889, 3890, 3891, 3892, 3893, 3894, 3895, 3896, 3897, 3898, 3899,
3900, 3901, 3902, 3903, 3904, 3905, 3906, 3907, 3908, 3909, 3910, 3911,
3912, 3913, 3914, 3915, 3916, 3917, 3918, 3919, 3920, 3921, 3922, 3923,
3924, 3925, 3926, 3927, 3928, 3929, 3930, 3931, 3932, 3933, 3934, 3935,
3936, 3937, 3938, 3939, 3940, 3941, 3942, 3943, 3944, 3945, 3946, 3947,
3948, 3949, 3950, 3951, 3952, 3953, 3954, 3955, 3956, 3957, 3958, 3959,
3960, 3961, 3962, 3963, 3964, 3965, 3966, 3967, 3968, 3969, 3970, 3971,
3972, 3973, 3974, 3975, 3976, 3977, 3978, 3979, 3980, 3981, 3982, 3983,
3984, 3985, 3986, 3987, 3988, 3989, 3990, 3991, 3992, 3993, 3994, 3995,
3996, 3997, 3998, 3999, 4000, 4001, 4002, 4003, 4004, 4005, 4006, 4007,
4008, 4009, 4010, 4011, 4012, 4013, 4014, 4015, 4016, 4017, 4018, 4019,
4020, 4021, 4022, 4023, 4024, 4025, 4026, 4027, 4028, 4029, 4030, 4031,
4032, 4033, 4034, 4035, 4036, 4037, 4038, 4039, 4040, 4041, 4042, 4043,
4044, 4045, 4046, 4047, 4048, 4049, 4050, 4051, 4052, 4053, 4054, 4055,
4056, 4057, 4058, 4059, 4060, 4061, 4062, 4063, 4064, 4065, 4066, 4067,
4068, 4069, 4070, 4071, 4072, 4073, 4074, 4075, 4076, 4077, 4078, 4079,
4080, 4081, 4082, 4083, 4084, 4085, 4086, 4087, 4088, 4089, 4090, 4091,
4092, 4093, 4094, 4095, 4096, 4097]]) torch.Size([1, 4098])
tensor([[   2,    3,    4,  ...,   273,   274,     1,     1,  ...,     1,     1,     1],
        [   2,    3,    4,  ...,  4098,  4099,     1,     1,  ...,     1,     1,     1]]) torch.Size([2, 4608])
```
Well, the model pads the sequence from 4098 to 4608 (because it's a multiple of 512, though I did not tell it to do so), and then the ```position_ids``` it ended up with starts with 1 instead of 0 and then there are also 4098 and 4099 in it, which I believe should be out of index? Needs confirmation from developers! Thx!<|||||>@LysandreJik I am pretty sure that ```create_position_ids_from_input_ids()``` in modeling_longformer.py can generate position_ids that are larger than 4097 and lead to out-of-index problem.<|||||>Hey @PxYu - yeah the reason is that even though `Longformer` has a `max_position_embeddings` of 4098 in its official config: https://huggingface.co/allenai/longformer-base-4096, the maximum input it can handle is only `4096` or otherwise the `create_position_ids` function will crash.
The same problem exists for `Roberta`, as discussed here: https://github.com/huggingface/transformers/pull/8044#issuecomment-716513140.
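For illustration, here is a minimal re-implementation of that position-id computation (my own sketch, not the library source, assuming `padding_idx=1` as Roberta/Longformer use for their pad token):
```python
import torch

def create_position_ids(input_ids, padding_idx=1):
    # Non-padding tokens get positions padding_idx+1, padding_idx+2, ...;
    # padding tokens keep position padding_idx.
    mask = input_ids.ne(padding_idx).int()
    return torch.cumsum(mask, dim=1) * mask + padding_idx

input_ids = torch.full((1, 4098), 5, dtype=torch.long)  # 4098 real tokens, as in the report above
print(create_position_ids(input_ids).max())  # tensor(4099) -> too large for a 4098-entry embedding table
```
So with `max_position_embeddings = 4098`, the longest input that actually fits is 4096 tokens (maximum index 4097).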
@LysandreJik - I think it's wrong that Roberta and Longformer (as it was built on Roberta) have `max_position_embeddings` of 514 and 4098 respectively. I think we should definitely change the default `max_position_embeddings` in `RobertaConfig` and `LongformerConfig` and even if it breaks backward compatibility, I would advocate for changing the parameter in the configs of the main models as well. Someone using a `input_ids` of length `max_position_embeddings = 514 or = 4098` would probably have led to errors anyways IMO. These issues will probably happen more often otherwise. What do you think? <|||||>As seen offline, having the `max_position_embeddings` set to 514 is an unfortunate decision we made when implementing the RoBERTa model and inheriting from the BERT model, a behavior we've since changed.
Unfortunately, changing the `max_position_embeddings` would be impossible now, as this would imply modifying the model code so that it handles the embeddings to be of size `max_position_embeddings + 2`, which would break all current existing models. We could reach for an if/else statement analyzing `transformers` version, but this would be very error prone, and would needlessly complicate the code.
We should document two things:
- The `max_position_embeddings` should not be used to create sequences, the tokenizer's `max_len` should be used instead. Since models and tokenizers work in pairs, it is not incoherent to have to rely on the tokenizer attribute to create model sequences.
- We should document that models should try to respect, as much as possible, the convention that `model.max_position_embeddings == tokenizer.max_len` to prevent such confusion from happening in newer models.<|||||>Agree! I think we could however also change the default value of `max_position_embeddings` of `LongformerConfig` and `RobertaConfig` to 4096 and 512 - this should not break anything as people load their saved config via `from_pretrained(...)`.
The big advantage of doing so would be to somewhat prevent future models that built on top of Roberta/Longformer from having these wrong numbers.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 8,076 | closed | [setup] update/add setup targets | updating pip targets
* [x] adds `tokenizer`
* [x] adds `docs` to `dev`, since we need to have the tools to run `make docs`
* [x] adds `flax` to `dev`, since we need to have the libs to run flax tests - except on Windows, where it is skipped
* [x] brings `all` up-to-date
@sgugger | 10-27-2020 02:08:20 | 10-27-2020 02:08:20 | Guys, since this is not code but a bunch of definitions please make direct suggestions that can be merged. Thank you.
I originally just wanted to add `docs` and `flax` to `dev` and had no idea about all the other needs.<|||||>I can't make a suggestion that moves stuff around in the file. I can push a commit to your branch if you want, but that's the best I can do.<|||||>I understand. I just don't know to work in this fashion. If you give me a spec I can work to implement it. Otherwise let's just merge this and subsequent PRs can do further improvements. <|||||>Sure, I'll add my comments in a separate PR. Thanks for doing this one!
|
transformers | 8,075 | closed | Create README.md | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| 10-27-2020 01:56:04 | 10-27-2020 01:56:04 | README.md file added for gurkan08/bert-turkish-text-classification model |
transformers | 8,074 | closed | Doc styling fixes | # What does this PR do?
This PR fixes a few of the docstrings left in a bad state by #8067 and a small bug in the styling script (it was matching lines of the form `.. note::` or `.. warning::` as if they were examples inside docstrings).
With this, everything should now be fine. | 10-27-2020 00:21:20 | 10-27-2020 00:21:20 | |
transformers | 8,073 | closed | [breaking|pipelines|tokenizers] Adding slow-fast tokenizers equivalence tests pipelines - Removing sentencepiece as a required dependency | # What does this PR do?
**Breaking**: Auto-tokenizers and pipelines:
- switch to `use_fast=True` by default (Fast tokenizers by default)
=> The main expected breaking change is **the handling of overflowing tokens** which is different between slow and fast tokenizers.
- removing sentencepiece from the required dependencies (in some special case this may require you to install `sentencepiece` in addition to the normal install).
Pipelines:
- Add slow/fast tokenizers equivalence tests in pipelines
- upgrade QA/NER processing pipeline to handle fast tokenizers
- remove `test_pipelines_dialog.py` which was a duplicated test file
Tokenizers:
- Update and add a new alignment method in `BatchEncoding`
Dependencies:
- upgrade to tokenizers==0.9.4 to allow QA processing with fast tokenizers
- remove sentencepiece from the required dependencies
Misc:
- Fix bug in RobertaFast and test for XLM-Prophetnet and RAG
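To make the first breaking change above concrete, here is a quick illustration of the new default and the opt-out (a sketch using a standard checkpoint):
```python
from transformers import AutoTokenizer

fast_tok = AutoTokenizer.from_pretrained("bert-base-cased")                  # now returns the fast (Rust) tokenizer
slow_tok = AutoTokenizer.from_pretrained("bert-base-cased", use_fast=False)  # previous behaviour
print(type(fast_tok).__name__, type(slow_tok).__name__)
# BertTokenizerFast BertTokenizer
```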
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| 10-26-2020 22:39:15 | 10-26-2020 22:39:15 | On hold for now until we update the input/output of the alignement methods in `tokenizers` to handle pairs of input sentences following internal discussion with @n1t0 and @Narsil.<|||||>Remaining error on the CI (NER pipeline not working for slow tokenizers) should be solved by #8364
Edit: ok solved now that #8364 is merged<|||||>examples/seq2seq/finetune.py is failing after this PR:
```
CUDA_VISIBLE_DEVICES=0 pytest -sv examples/seq2seq/test_seq2seq_examples.py::TestTheRest::test_finetune_0_patrickvonplaten_t5_tiny_random
```
```
Traceback (most recent call last):
File "finetune.py", line 442, in <module>
main(args)
File "finetune.py", line 409, in main
trainer: pl.Trainer = generic_train(
File "/mnt/nvme1/code/huggingface/transformers-master/examples/lightning_base.py", line 398, in generic_train
trainer.fit(model)
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 444, in fit
results = self.accelerator_backend.train()
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pytorch_lightning/accelerators/gpu_accelerator.py", line 63, in train
results = self.train_or_test()
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 74, in train_or_test
results = self.trainer.train()
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 493, in train
self.train_loop.run_training_epoch()
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py", line 554, in run_training_epoch
for batch_idx, (batch, is_last_batch) in train_dataloader:
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pytorch_lightning/profiler/profilers.py", line 80, in profile_iterable
value = next(iterator)
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/data_connector.py", line 46, in _with_is_last
last = next(it)
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 519, in __next__
data = self._next_data()
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1169, in _next_data
return self._process_data(data)
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1195, in _process_data
data.reraise()
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/_utils.py", line 428, in reraise
raise self.exc_type(msg)
AttributeError: Caught AttributeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 202, in _worker_loop
data = fetcher.fetch(index)
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
return self.collate_fn(data)
File "/mnt/nvme1/code/huggingface/transformers-master/examples/seq2seq/utils.py", line 251, in collate_fn
batch_encoding: Dict[str, torch.Tensor] = self.tokenizer.prepare_seq2seq_batch(
File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/models/bart/tokenization_bart_fast.py", line 127, in prepare_seq2seq_batch
model_inputs: BatchEncoding = self(
File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/tokenization_utils_base.py", line 2319, in __call__
return self.batch_encode_plus(
File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/tokenization_utils_base.py", line 2504, in batch_encode_plus
return self._batch_encode_plus(
File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/models/gpt2/tokenization_gpt2_fast.py", line 167, in _batch_encode_plus
return super()._batch_encode_plus(*args, **kwargs)
File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/tokenization_utils_fast.py", line 433, in _batch_encode_plus
return BatchEncoding(sanitized_tokens, sanitized_encodings, tensor_type=return_tensors)
File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/tokenization_utils_base.py", line 242, in __init__
n_sequences = encoding[0].n_sequences
AttributeError: 'tokenizers.Encoding' object has no attribute 'n_sequences'
Exception ignored in: <function tqdm.__del__ at 0x7f60b55d7b80>
Traceback (most recent call last):
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/tqdm/std.py", line 1128, in __del__
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/tqdm/std.py", line 1341, in close
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/tqdm/std.py", line 1520, in display
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/tqdm/std.py", line 1131, in __repr__
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/tqdm/std.py", line 1481, in format_dict
TypeError: cannot unpack non-iterable NoneType object
```<|||||>Hi @stas00, it seems you don't have the latest tokenizer version!<|||||>Bummer! Thank you for identifying that, @LysandreJik - I confirm updating `tokenizers` fixes it.
Since this happened more than once now, would it be possible to add and maintain run-time checks as we have done here:
https://github.com/huggingface/transformers/blob/dd52804f5fce0a568ffbb3dc7fd088d2de0a0e56/examples/lightning_base.py#L38-L47
How a developer is to know that they need to update a dependency of a project otherwise? `git pull` doesn't trigger `pip install -e ".[dev]"`
Except the code above should emit an error on failed check. It should be an error in `lightning_base.py` too, but I couldn't convince @sshleifer to make it so. Warnings are plentiful and this approach of using a warning just doesn't serve its purpose, IMHO.
<|||||>I like the function (though we can expand its name and replace `ver` by `version` inside to make it more readable ;-) ) and I think we could have a dynamic check at init. I don't see any problem with throwing an error if the minimum version of tokenizers isn't installed, but maybe @LysandreJik or @patrickvonplaten have a different opinion.<|||||>Oh, any name works, it was just code first, then I thought that it'll eventually end up being a reusable function, so the half-baked function emerged.
I'm glad you agree that it should assert if the minimal requirements aren't met.
Technically we should then add `packaging` and `pkg_resources` to the project dependencies, but since they are prerequisites for `setuptools` - every user should already have them.
And we will need 2 versions of errors:
* `Try: pip install -r examples/requirements.txt` for examples,
* `Try: pip install transformers -U"` for the core.<|||||>Agree! I actually also already ran into the same issue @stas00 -> I'm definitely in favor of such a function! Should we run the function at `import transformers` already ? <|||||>So did I, and I thought everything was broken until @sgugger showed me the way. Your function would definitely help in that regard @stas00 :)<|||||>> Should we run the function at `import transformers` already ?
Two ways I can see:
1. right where the corresponding import is done - so it's easier to see the specific requirement - but it could be scattered - but it could be more difficult to maintain. Ideally, python would have `import module 1.2.3`, like some other languages have, but it doesn't at the moment.
2. all in one place, at the very top of `__init__.py`, all requirements next to each other so it's easy to maintain. This check relies on packaging tools - i.e. derives it from the `site-packages` dir, so it shouldn't even load the module. i.e. we don't try to look for `xxx.__version__` here, since not all packages have it.
I'd say (2) is the easier way.
Last night I was dreaming of a trigger feature, where if git sees `setup.py` modified it'd alert someone to update `__init__.py` requirements - but it was a dream.
<|||||>I think it can be in any file imported by the `__init__` (as long as the function is executed), so we could also have this in `file_utils.py`. Though the `__init__` is fine by me too if you think it's better.<|||||>Sure, then let's add a dedicated module then? It'd be the most simple/intuitive then
```
$ cat version_requirements.py
require("tokenizers", "1.2.3")
...
```
and `__init__.py`:
```
import .version_requirements
```<|||||>hear, hear! if we have such a file then we can even use it to feed `setup.py`! so we have a single place where we edit all the minimal version requirements and don't need to touch `setup.py` and potentially forget to sync requirements.
In which case I'd use a dict and feed it to `require_version` (or whatever we end up calling it).
Clearly setup has a lot of optional things, so perhaps then we load this file at the end of __init__ and only check versions for the things that got loaded?
or we just test only the package names that we know we need to check, but use that dict for setup.py's needs.
Let me know if these ideas are an overkill.<|||||>Here is a quick prototype to what I'm thinking:
```
$ cat src/transformers/version_requirements.py
min_vers = dict(
tokenizers: "==0.9.4",
tqdm: ">=4.27",
jaxlib: "==0.1.55",
)
run_time_keys = "tokenizers tqdm".split()
for k in run_time_keys:
require_min_ver(k, min_vers[k])
$ cat setup.py
from version_requirements import min_vers
# of course we won't hardcode each entry - this is a just to demonstrate
extras["flax"] = [f"jaxlib{min_vers{'jax_lib']}", ...
```
so you can see the dictionary has all the versions, but we actively check only the versions that are non-optional.
<|||||>One downside of this is that it would move dependencies out of the setup.py (which is where people would expect to see them). Do you think there is a way to structure this so the one place we look at minimum version is the setup? It would be less surprising I think. <|||||>I agree.
We could have all the version requirements defined in `setup.py` and when it's run it'd update `src/transformers/version_requirements.py` instead. Then we would actually want 2 files under transformers - one that `setup.py` will maintain - it will be just a dict dump - so that it could overwrite the file completely and another for the selective run-time checks that would refer to the first file generated by setup, since we will only check a handful of these many dependencies at run time.
```
$ cat setup.py
min_vers = dict(
tokenizers: "==0.9.4",
tqdm: ">=4.27",
jaxlib: "==0.1.55",
)
# add code to dump min_vers dict into `src/transformers/version_requirements.py`
# of course we won't hardcode each entry - this is a just to demonstrate
extras["flax"] = [f"jaxlib{min_vers{'jax_lib']}", ...
$ cat src/transformers/version_requirements.py
# AUTOGENERATED - MODIFY setup.py INSTEAD! #
min_vers = dict(
tokenizers: "==0.9.4",
tqdm: ">=4.27",
jaxlib: "==0.1.55",
)
$ cat src/transformers/version_run_time_check.py
from .version_requirements import min_vers
# define which module versions we always want to check (only a few)
run_time_keys = "tokenizers tqdm".split()
for k in run_time_keys:
require_min_ver(k, min_vers[k])
$ cat src/transformers/__init__.py
import .version_run_time_check
```
This is of course all just a visual prototype.
<|||||>If you want to tackle this, please go ahead with something along these lines. We can refine more on an actual PR.<|||||>OK, I will make a partial sub-set of modules and when you like how it looks expand it to all modules.<|||||>Is there any way to workaround the version check?
I want to use some features from `tokenizers 0.10`, but Transformers raise `VersionConflict`.
Surely, this can cause some very non-obvious bugs, but at least I'll be able to work with my code before the new version of Transformers is released.<|||||>I think this is a very reasonable need, @Guitaricet. But it's probably best to discuss it in a dedicated issue. Could you please file a [feature request](https://github.com/huggingface/transformers/issues/new/choose) and let's see what others would think?
I'd say an env var to override the checks should do the trick. Should be easy to add if the others agree with having it. |
transformers | 8,072 | closed | `BartForConditionalGeneration.from_pretrained` suddenly fails | I have been using the same `BartForConditionalGeneration` model and `transformers==3.0.2` for weeks, but today the same code threw a new error that has never happened before. It says I am passing an unexpected `output_past` parameter to `from_pretrained`. I am loading the `facebook/bart-large` model.
The line throwing the error is `BartForConditionalGeneration.from_pretrained("facebook/bart-large", output_past=True)`
```
/usr/local/lib/python3.6/dist-packages/qaeval/generation/model.py in __init__(self, vocab, model_name, max_decoding_steps, beam_size)
59 beam_size: int = 4) -> None:
60 super().__init__(vocab)
---> 61 self.bart = BartForConditionalGeneration.from_pretrained(model_name, output_past=True)
62 self.tokenizer = PretrainedTransformerTokenizer(model_name)
63
/usr/local/lib/python3.6/dist-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
670
671 # Instantiate model.
--> 672 model = cls(config, *model_args, **model_kwargs)
673
674 if state_dict is None and not from_tf:
TypeError: __init__() got an unexpected keyword argument 'output_past'
```
I am running this code in a Jupyter Notebook. This morning the code ran (I have all of the output saved). When I make a copy of the notebook and rerun it now without making any changes, it fails with the above error. I see that the `output_past` parameter was removed [here](https://github.com/huggingface/transformers/pull/3632/files/904b387af42744f9141a6dc4be698a5815ce5bbd), but that does not explain why it was working up until just a few hours ago.
I can see in the saved output between the two notebooks that one of the files that transformers first downloads when loading the model used to be 1.26kb in size, but now it's 1.52k. I assume this is the config file for `facebook/bart-large`.
Did anything change to that file within the past few hours? I don't know where to look for the copy of that file to see what happened. I am quite perplexed by this issue. | 10-26-2020 22:35:23 | 10-26-2020 22:35:23 | I am pretty sure my guess at what happened was correct.
On one of my machines, I have the bart-large/config.json from October 12 with etag "40bd49bcec9d93d8b0bfbd020088e2e1b6e6bb03e8e80aa5144638f90ca6bd61" and it is 1.26kb. It contains an entry `"output_past": false`. Today, there is a file with a new etag "8b65d3b9a47e96c1909d807f7e7f41dd1ed95092b139965be7b914aa4fb5fd08" and it is 1.52kb. It does not contain any `output_past` entry.
What could I have done to prevent this from happening? Is there a way to specify a specific version of the models?<|||||>I can confirm your diagnostic is correct.
You can check the diff between the two versions with:
```
curl -i https://s3.amazonaws.com/models.huggingface.co/bert/facebook/bart-large/config.json?versionId=PFecmBwmg83YUwpv_kkc3kBzoCGebvu7
curl -i https://s3.amazonaws.com/models.huggingface.co/bert/facebook/bart-large/config.json?versionId=JhIFsOvvLtrLn0vJjGNN6ZhJGUlbXBEP
```
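In the meantime, one workaround (just a suggestion, not an official versioning mechanism) is to save a known-good snapshot locally once and load it from disk afterwards, so later runs no longer depend on the remote files:
```python
from transformers import BartForConditionalGeneration, BartTokenizer

model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")
tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model.save_pretrained("./bart-large-snapshot")
tokenizer.save_pretrained("./bart-large-snapshot")

# later runs load the frozen local copy:
model = BartForConditionalGeneration.from_pretrained("./bart-large-snapshot")
tokenizer = BartTokenizer.from_pretrained("./bart-large-snapshot")
```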
I'll let @sshleifer and @patrickvonplaten chime in about the actual file change, but to answer your last question:
> What could I have done to prevent this from happening? Is there a way to specify a specific version of the models?
We will roll out a way to specify specific versions of models in the near future.<|||||>I mistakenly changed the `config`, my fault.
Can you pass `output_past` to `__init__` or do you need me to add back the `output_past` key?<|||||>I was able to just remove the flag and it works with the updated config. To be honest, I don't know what the flag did -- I modified someone else's model which used it.
Thanks for looking into this. This issue can be closed |
transformers | 8,071 | closed | [All Seq2Seq model + CLM models that can be used with EncoderDecoder] Add cross-attention weights to outputs | # What does this PR do?
This PR causes models that support cross-attention to output the cross-attention tensors as well as the self-attention tensors when output_attentions is set
(from @patrickvonplaten)
This PR adds cross-attention outputs to all Seq2Seq models and to CLM models compatible with the `EncoderDecoderModel` framework.
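A rough usage sketch of what this enables — the `cross_attentions` field name follows what is proposed here, so treat it as tentative:
```python
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

batch = tokenizer("Hello world", return_tensors="pt")
outputs = model(**batch, output_attentions=True, return_dict=True)

# Besides the encoder/decoder self-attention weights, the outputs now also expose the
# decoder-over-encoder (cross) attention weights, one tensor per decoder layer:
print(len(outputs.cross_attentions), outputs.cross_attentions[0].shape)
# -> num_decoder_layers, (batch_size, num_heads, target_len, source_len)
```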
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR. @patrickvonplaten @Bharat123rox
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
--> | 10-26-2020 20:48:26 | 10-26-2020 20:48:26 | Hey @ysgit - I like the idea! However, I think we should actually create a new output tuple for the cross-attention (called `cross_attention` when `return_dict`=True). For this we will have to create new ModelOutputs for Bert, RoBERTa, etc...
<|||||>@patrickvonplaten that makes sense, I guess I suspected that might be the reaction, I'll see if that's something I can manage although I'm a little hampered by not being able to get the test suite to successfully run locally<|||||>@patrickvonplaten I have done as you suggested and separated cross attentions into a new variable in the output. please take a look and let me know what you think. many thanks!<|||||>In a future PR or a "Good First Issue", we could add this functionality to the TFSeq2Seq models as well, but I'm a bit hesitant given the current problems with the `output_attentions` flag in TF.
What do you think @sgugger @LysandreJik @jplu ?<|||||>This should be easier to integrate in TF in the next release :)<|||||>Thanks everyone! |
transformers | 8,070 | closed | Pretraining for encoder of TF T5 model | Hi, for sequence to sequence task in tensorflow, I see that I can use TFT5 model. However, before sequence to sequence training, I need to perform masked language model pretraining on the encoder and initialise the weights of the encoder and decoder with the same weights that I obtain by masked language model pretraining. Is there a way to do this in the tensorflow version of T5?
I could do that with the EncoderDecoder, but it is only supported in PyTorch and not tensorflow. | 10-26-2020 20:36:51 | 10-26-2020 20:36:51 | @dharakotecha, you would need to create a MaskedLM head similar to distilbert. You would also need to either use one of the masekd token <extra_id_0> or add a new token for mask.
Some experimentation on the t5 encoder shows that it does produce pretty reasonable output, but does not do MLM at the encoder side. The cloze/fill in the blank happens in the decoder side.
This is just quick and dirty. You would need to create a module similar to a MLM for distilbert with the loss, feeding in the output from the encoder into the projector:
```
t5 = AutoModel.from_pretrained('t5-small').cuda()
tokenizer = AutoTokenizer.from_pretrained('t5-small')
encoder = t5.encoder
embeddings = encoder.embed_tokens
config = t5.config
#tied the embedding to the projector.
vocab_projector = nn.Linear(config.d_model, config.vocab_size).cuda()
vocab_projector.weight = embeddings.weight
vocab_projector.bias.data = torch.nn.functional.pad(vocab_projector.bias.data, (0, vocab_projector.weight.shape[0] - vocab_projector.bias.shape[0],), "constant", 0)
def pred(predictions):
for pred in predictions:
print ("**")
sorted_preds, sorted_idx = pred.sort(dim=-1, descending=True)
ret = []
for k in range(2):
predicted_index = [sorted_idx[i, k].item() for i in range(0,len(predictions[0]))]
predicted_token = ' '.join([tokenizer.convert_ids_to_tokens([predicted_index[x]])[0] for x in range(1,len(predictions[0]))]).replace('Ġ', ' ').replace(' ', ' ').replace('##', '')
ret.append(predicted_token)
return ret
input_txt = ["</s>Lincoln was an American president and lawyer"]
inputs = tokenizer(input_txt, return_tensors='pt', add_special_tokens=True, padding=True)
predictions = vocab_projector(encoder(inputs.input_ids.cuda())[0])
pred(predictions)
```
Outputs:
['\u2581Lincoln \u2581was \u2581psiho American \u2581President & \u2581lawyer </s>', '\u2581senzati tais Mitglied \u2581American \u2581president \u2581and \u2581Lawyer \u2581summarize']**
But, using the extra mask, will not infer the missing token.
```
input_txt = ["</s>Lincoln <extra_id_0> American president and lawyer"]
inputs = tokenizer(input_txt, return_tensors='pt', add_special_tokens=True, padding=True)
predictions = vocab_projector(encoder(inputs.input_ids.cuda())[0])
pred(predictions)
```
Will output:
['\u2581Lincoln <extra_id_0> \u2581American \u2581president & \u2581lawyer </s>', '\u2581Abraham \u2581botez American \u2581President \u2581and \u2581Lawyer gasesc']**
Hope this helps,
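And for the TensorFlow side of the question, a rough equivalent of the snippet above — an untested sketch, where the `.encoder` attribute and the projection setup are assumed to mirror the PyTorch implementation:
```python
import tensorflow as tf
from transformers import TFT5Model, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
t5 = TFT5Model.from_pretrained("t5-small")
config = t5.config

# Fresh projection from hidden states back to the vocabulary; you could instead tie its
# kernel to the shared input embeddings, as the PyTorch example above does.
vocab_projector = tf.keras.layers.Dense(config.vocab_size, use_bias=False)

inputs = tokenizer(["Lincoln was an American president and lawyer"], return_tensors="tf")
hidden_states = t5.encoder(inputs.input_ids)[0]   # (batch, seq_len, d_model)
logits = vocab_projector(hidden_states)           # (batch, seq_len, vocab_size)

# For MLM-style pretraining: mask some input ids, then minimize
# tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) over the masked positions.
```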
<|||||>I just realized that you're asking about TensorFlow too... I'm assuming you could do a similar thing by creating an MLM head in TensorFlow just for the encoder (along the lines of the sketch above).<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 8,069 | closed | DEP: pinned sentencepiece to 0.1.91 in setup.py to fix build issues with newer versions | # What does this PR do?
Pins `sentencepiece` to `0.1.91` to resolve build issues with newer versions
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
https://github.com/huggingface/transformers/issues/8020
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
@LysandreJik
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| 10-26-2020 20:28:20 | 10-26-2020 20:28:20 | I can check the other 3 boxes if needed. They didn't seem to apply to this particular PR so I left them unchecked.
|
transformers | 8,068 | closed | seq2seq/finetune.py: remove useless check | Removed on built-in already
```python
self.dataset_class = (
Seq2SeqDataset if hasattr(self.tokenizer, "prepare_seq2seq_batch") else LegacySeq2SeqDataset
)
``` | 10-26-2020 20:13:04 | 10-26-2020 20:13:04 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 8,067 | closed | Doc styling | # What does this PR do?
This PR introduces a doc styling script and applies it to the repo. The styling script runs similarly to black, with an option that fixes and overwrites the files (put inside `make style`) and an option that only checks if there should be a restyle, failing with an error if that's the case (put inside `make quality`).
The script is applied to all rst files inside `docs/source` and all py files inside `src/transformers`. It will look for paragraphs and always reorganize them to use the most of the `max_len` passed (set at 119 for the repo, like for the code). It will remove all duplicate or trailing whitespace, make all blank lines empty, ignore blocks of code/math and properly take care of the indentation.
A few extra things are performed:
- making the underline of the titles in rst to the `max_len` and always adding a blank line after those titles.
- unifying the format of the triple docstrings in the files
- always adding a new line before the beginning of a list (because sphinx sometimes complains otherwise)
To make the script ignore a string inside triple quotes (like warnings or long regex expressions), put a `# docstyle-ignore` somewhere before (it has to be between the previous triple quotes and the ones of the string you want to ignore).
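For instance (an illustrative snippet, the constant name is made up):
```python
# docstyle-ignore
LONG_PATTERN_DOC = """
(?x)              # a long regex the styler should leave exactly as written
(?:foo|bar){1,3}
"""
```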
In general, if the script reformats a docstring atrociously, it was because the docstring was badly formatted. Adding a blank line to clearly mark paragraphs can make the script happier. Properly indenting lists of arguments (see examples in any of the files of the lib) is also important to get good outputs.
| 10-26-2020 19:31:58 | 10-26-2020 19:31:58 | A few loose-ends to tie, but that will be for tomorrow! |
transformers | 8,066 | closed | Missing Import | When trying to run this file:
https://github.com/huggingface/transformers/blob/3a10764574f252591eeaa5bbb10b778f623a4814/examples/language-modeling/run_language_modeling.py#L40
The following error occurs:
````
Traceback (most recent call last):
File "run_language_modeling.py", line 32, in <module>
from transformers import (
ImportError: cannot import name 'DataCollatorForWholeWordMask'
````
I attempted to run the following:
````
import transformers
for i in dir(transformers):
if "data" in i.lower():
print (i)
````
And I got the following:
````
CsvPipelineDataFormat
DataCollator
DataCollatorForLanguageModeling
DataCollatorForNextSentencePrediction
DataCollatorForPermutationLanguageModeling
DataCollatorForSOP
DataCollatorWithPadding
DataProcessor
GlueDataTrainingArguments
GlueDataset
JsonPipelineDataFormat
LineByLineTextDataset
LineByLineWithSOPTextDataset
PipedPipelineDataFormat
PipelineDataFormat
SquadDataTrainingArguments
SquadDataset
TextDataset
TextDatasetForNextSentencePrediction
data
default_data_collator
is_datasets_available
````
It appears that the **DataCollatorForWholeWordMask** is not a part of transformers for some reason.
I commented it out and this one also appears to have an issue (I'll list all that complain at the bottom of this post and edit it as I find more)
At this point I commented the below imports out and it is running (well it's downloading one of the models I believe). I'll update if it fails/succeeds.
Missing Imports from transformers:
* DataCollatorForWholeWordMask
* LineByLineWithRefDataset
| 10-26-2020 18:53:56 | 10-26-2020 18:53:56 | Remember that the examples you pull from master need a [source install](https://huggingface.co/transformers/installation.html#installing-from-source). If you want the version that runs with the last release, you need to use that tag, here are the [examples for the last release](https://github.com/huggingface/transformers/releases/tag/v3.4.0) (v3.4.0).<|||||>You make a compelling argument :)
I did a checkout of the v3.4.0 release, and besides dying from a CUDA out of memory, it appears to be working :) |
transformers | 8,065 | closed | load 'microsoft/unilm-base-cased' failed | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
<!-- Description of your issue -->
I use the following code in https://huggingface.co/microsoft/unilm-base-cased to load the model.
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("microsoft/unilm-base-cased")
model = AutoModel.from_pretrained("microsoft/unilm-base-cased")
```
And I got the traceback like this
tokenizer = AutoTokenizer.from_pretrained("microsoft/unilm-base-cased")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\transformers\tokenization_auto.py", line 298, in from_pretrained
config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs)
File "C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\transformers\configuration_auto.py", line 341, in from_pretrained
raise ValueError(
ValueError: Unrecognized model in microsoft/unilm-base-cased. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, bert-generation, camembert, xlm-roberta, pegasus, marian, mbart, bart, blenderbot, reformer, longformer, roberta, deberta, flaubert, fsmt, squeezebert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm-prophetnet, prophetnet, xlm, ctrl, electra, encoder-decoder, funnel, lxmert, dpr, layoutlm, rag
>>>
>>> model = AutoModel.from_pretrained("microsoft/unilm-base-cased")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\transformers\modeling_auto.py", line 623, in from_pretrained
config, kwargs = AutoConfig.from_pretrained(
File "C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\transformers\configuration_auto.py", line 341, in from_pretrained
raise ValueError(
ValueError: Unrecognized model in microsoft/unilm-base-cased. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, bert-generation, camembert, xlm-roberta, pegasus, marian, mbart, bart, blenderbot, reformer, longformer, roberta, deberta, flaubert, fsmt, squeezebert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm-prophetnet, prophetnet, xlm, ctrl, electra, encoder-decoder, funnel, lxmert, dpr, layoutlm, rag
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on the forum/Stack Overflow**: | 10-26-2020 18:48:01 | 10-26-2020 18:48:01 | Hello
Same problem for us.
Do you plan to investigate / fix it ?
Cheers
Philippe<|||||>The UniLM model has not been released in the library yet.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 8,064 | closed | [QOL] PretrainedConfig.to_diff_dict(other_config) | This PR allows users to compare two configs in a backwards compatible way.
### Solution
The following calls work:
```python
bart_large_config = BartConfig.from_pretrained('facebook/bart-large')
bart_base_config = BartConfig.from_pretrained('facebook/bart-base')
t5_config = T5Config.from_pretrained('t5-small')
bart_large_config.to_diff_dict() # unchanged
bart_large_config.to_diff_dict(bart_base_config) # compares configs
bart_large_config.to_diff_dict(t5_config) # can be across subtypes
bart_large_config.to_diff_dict(bart_base_config.to_dict()) # can be against dict
```
Adds test that outputs are reasonable.
### Problem
Current best way to compare configs is to define your own function. Here is the one I use. Also good for debugging conversion scripts:
```python
def dct_differences(dct_a, dct_b):
SENTINEL = '__MissingKey'
k1, k2 = set(dct_a), set(dct_b) # just the keys
deltas = []
for k in k1.union(k2):
vala, valb = dct_a.get(k, SENTINEL), dct_b.get(k, SENTINEL)
# TODO(SS): nested dicts? Maybe better to dump to json and compare (after sorting keys!)
if vala == valb:
if (vala == SENTINEL and valb == SENTINEL): raise AssertionError('Adversarial Sentinel Input!')
else:
deltas.append((k, vala, valb))
return deltas
bart_large_config = BartConfig.from_pretrained('facebook/bart-large')
bart_base_config = BartConfig.from_pretrained('facebook/bart-base')
delta = dct_differences(bart_large_config.to_dict(), bart_base_config.to_dict())
```
this implementation is almost as useful without breaking backwards compatibility. | 10-26-2020 18:48:00 | 10-26-2020 18:48:00 | Didn't work. Will re-open when I have something better. |
transformers | 8,063 | closed | Fix TF training arguments instantiation | Check that pytorch is installed before checking the device type. | 10-26-2020 18:38:06 | 10-26-2020 18:38:06 | |
transformers | 8,062 | closed | Add AzureML in integrations via dedicated callback | # What does this PR do?
I propose this PR to allow transformers to call AzureML logging using https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.run(class)?view=azure-ml-py
The intended behaviour is to enable `transformers` users to track metrics in the AzureML UI in this fashion

Contributors to https://github.com/microsoft/AzureML-BERT and folks @microsoft may well come up with a better implementation though!
I am glad to improve the following if reviewers like the idea, and update docs and tests if needed.
@reviewers feel free to add any suggestions as my contributions to transformers have been very limited so far :)
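Roughly, the callback looks like this (a simplified sketch, naming approximate):
```python
from azureml.core.run import Run
from transformers import TrainerCallback

class AzureMLCallback(TrainerCallback):
    def __init__(self, azureml_run=None):
        # If no run is passed in, pick up the run the script was submitted under.
        self.azureml_run = azureml_run or Run.get_context()

    def on_log(self, args, state, control, logs=None, **kwargs):
        # Forward every scalar the Trainer logs (loss, learning rate, eval metrics, ...) to AzureML.
        if state.is_world_process_zero and logs:
            for k, v in logs.items():
                if isinstance(v, (int, float)):
                    self.azureml_run.log(k, v, description=k)
```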
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Yes, check for detais https://discuss.huggingface.co/t/how-to-integrate-an-azuremlcallback-for-logging-in-azure/1713/4
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@julien-c is aware of this and @sgugger participated in the thread on forums above and implemented callbacks with https://github.com/huggingface/transformers/pull/7596
| 10-26-2020 18:34:20 | 10-26-2020 18:34:20 | Code changes pass tests @sgugger ! Thanks @alvarobartt for having a look at those also.
I guess I need to take care of docs/source/main_classes/callback.rst as well to complete the checklist though?<|||||>Yes, if you could add a line to the rst file, that would be great!<|||||>You might need to rebase to have the latest script for doc formatting, which should do everything to make the CI happy with just `make style`. Let me know if you need help.<|||||>Sorry for the somewhat messy hacktoberfest @sgugger !
I wasn't so aware of rst idiosyncrasies, and ways to diagnose issues, so I struggled a bit. Should be in order now (bonus, I fixed a typo on the way https://github.com/huggingface/transformers/pull/8062/commits/286f20c0594c3d16c824963c24c8fb1bc1d43bc6) |
transformers | 8,061 | closed | Doc fixes in preparation for the docstyle PR | # What does this PR do?
This PR fixes a few docstrings and adds the `# docstyle-ignore` marker where necessary in preparation for the big docstyle PR. | 10-26-2020 18:16:47 | 10-26-2020 18:16:47 | |
transformers | 8,060 | closed | a multitude of deprecations for pytorch-1.7+ | This is not urgent. There is a ton of deprecation warnings across many modules with pytorch-1.7+ and a few with python-3.8:
(I hard-wrapped the lines to avoid the need to scroll, but it makes it somewhat harder to see the warnings):
```
src/transformers/modeling_deberta.py:18 src/transformers/modeling_deberta.py:18
src/transformers/modeling_deberta.py:18 src/transformers/modeling_deberta.py:18
src/transformers/modeling_deberta.py:18
src/transformers/modeling_deberta.py:18:
DeprecationWarning: Using or importing the ABCs from 'collections' instead of
from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop
working from collections import Sequence
tests/test_logging.py::HfArgumentParserTest::test_integration
tests/test_logging.py:40:
DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
logger.warn(msg)
tests/test_logging.py::HfArgumentParserTest::test_integration
tests/test_logging.py:48:
DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
logger.warn(msg)
tests/test_benchmark.py::BenchmarkTest::test_inference_torchscript
tests/test_modeling_gpt2.py::GPT2ModelTest::test_torchscript
tests/test_modeling_gpt2.py::GPT2ModelTest::test_torchscript_output_attentions
tests/test_modeling_gpt2.py::GPT2ModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_gpt2.py:164:
TracerWarning: Converting a tensor to a Python float might cause the trace to be
incorrect. We can't record the data flow of Python values, so this value will be
treated as a constant in the future. This means that the trace might not
generalize to other inputs! w = w / (float(v.size(-1)) ** 0.5)
tests/test_benchmark.py::BenchmarkTest::test_inference_torchscript
tests/test_modeling_gpt2.py::GPT2ModelTest::test_torchscript
tests/test_modeling_gpt2.py::GPT2ModelTest::test_torchscript_output_attentions
tests/test_modeling_gpt2.py::GPT2ModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_gpt2.py:169:
TracerWarning: Converting a tensor to a Python index might cause the trace to be
incorrect. We can't record the data flow of Python values, so this value will be
treated as a constant in the future. This means that the trace might not
generalize to other inputs! mask = self.bias[:, :, ns - nd : ns, :ns]
tests/test_modeling_auto.py::AutoModelTest::test_from_identifier_from_model_type
tests/test_modeling_auto.py::AutoModelTest::test_from_pretrained_identifier
src/transformers/modeling_auto.py:821:
FutureWarning: The class `AutoModelWithLMHead` is deprecated and will be removed
in a future version. Please use `AutoModelForCausalLM` for causal language
models, `AutoModelForMaskedLM` for masked language models and
`AutoModelForSeq2SeqLM` for encoder-decoder models. warnings.warn(
tests/test_benchmark_tf.py::TFBenchmarkTest::test_train_no_configs
tests/test_benchmark_tf.py::TFBenchmarkTest::test_train_with_configs
/home/stas/anaconda3/envs/py38-pt17/lib/python3.8/site-packages/tensorflow/python/framework/indexed_slices.py:432:
UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape.
This may consume a large amount of memory. warnings.warn(
tests/test_modeling_albert.py::AlbertModelTest::test_torchscript
tests/test_modeling_albert.py::AlbertModelTest::test_torchscript_output_attentions
tests/test_modeling_albert.py::AlbertModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_albert.py:229:
TracerWarning: Converting a tensor to a Python index might cause the trace to be
incorrect. We can't record the data flow of Python values, so this value will be
treated as a constant in the future. This means that the trace might not
generalize to other inputs! position_ids = self.position_ids[:, :seq_length]
tests/test_modeling_albert.py: 3 warnings tests/test_modeling_bert.py: 3
warnings tests/test_modeling_bert_generation.py: 3 warnings
tests/test_modeling_distilbert.py: 2 warnings tests/test_modeling_dpr.py: 3
warnings tests/test_modeling_flaubert.py: 3 warnings
tests/test_modeling_electra.py: 3 warnings tests/test_modeling_layoutlm.py: 3
warnings tests/test_modeling_roberta.py: 3 warnings tests/test_modeling_xlm.py:
3 warnings tests/test_modeling_xlnet.py: 3 warnings
src/transformers/modeling_utils.py:1670:
TracerWarning: Converting a tensor to a Python boolean might cause the trace to
be incorrect. We can't record the data flow of Python values, so this value will
be treated as a constant in the future. This means that the trace might not
generalize to other inputs! input_tensor.shape == tensor_shape for input_tensor
in input_tensors
tests/test_modeling_bert_generation.py: 32 warnings
src/transformers/modeling_bert_generation.py:417:
DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
logger.warn("If you want to use `BertGenerationDecoder` as a standalone, add
`is_decoder=True.`")
tests/test_modeling_bert.py::BertModelTest::test_torchscript
tests/test_modeling_bert.py::BertModelTest::test_torchscript_output_attentions
tests/test_modeling_bert.py::BertModelTest::test_torchscript_output_hidden_state
tests/test_modeling_dpr.py::DPRModelTest::test_torchscript
tests/test_modeling_dpr.py::DPRModelTest::test_torchscript_output_attentions
tests/test_modeling_dpr.py::DPRModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_bert.py:191:
TracerWarning: Converting a tensor to a Python index might cause the trace to be
incorrect. We can't record the data flow of Python values, so this value will be
treated as a constant in the future. This means that the trace might not
generalize to other inputs! position_ids = self.position_ids[:, :seq_length]
tests/test_modeling_bart.py::BARTModelTest::test_torchscript
tests/test_modeling_bart.py::BARTModelTest::test_torchscript_output_attentions
tests/test_modeling_bart.py::BARTModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_bart.py:175:
TracerWarning: Converting a tensor to a Python boolean might cause the trace to
be incorrect. We can't record the data flow of Python values, so this value will
be treated as a constant in the future. This means that the trace might not
generalize to other inputs! if decoder_padding_mask is not None and
decoder_padding_mask.shape[1] > 1:
tests/test_modeling_bart.py: 3 warnings tests/test_modeling_flaubert.py: 3
warnings tests/test_modeling_fsmt.py: 3 warnings tests/test_modeling_roberta.py:
3 warnings tests/test_modeling_xlm.py: 3 warnings
/home/stas/anaconda3/envs/py38-pt17/lib/python3.8/site-packages/torch/nn/functional.py:1836:
TracerWarning: Converting a tensor to a Python boolean might cause the trace to
be incorrect. We can't record the data flow of Python values, so this value will
be treated as a constant in the future. This means that the trace might not
generalize to other inputs! assert padding_idx < weight.size(0), 'Padding_idx
must be within num_embeddings'
tests/test_modeling_bart.py::BARTModelTest::test_torchscript
tests/test_modeling_bart.py::BARTModelTest::test_torchscript_output_attentions
tests/test_modeling_bart.py::BARTModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_bart.py:720:
TracerWarning: Converting a tensor to a Python boolean might cause the trace to
be incorrect. We can't record the data flow of Python values, so this value will
be treated as a constant in the future. This means that the trace might not
generalize to other inputs! assert key_padding_mask is None or
key_padding_mask.shape == (bsz, src_len)
tests/test_modeling_bart.py::BARTModelTest::test_torchscript
tests/test_modeling_bart.py::BARTModelTest::test_torchscript_output_attentions
tests/test_modeling_bart.py::BARTModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_bart.py:722:
TracerWarning: Converting a tensor to a Python boolean might cause the trace to
be incorrect. We can't record the data flow of Python values, so this value will
be treated as a constant in the future. This means that the trace might not
generalize to other inputs! assert attn_weights.size() == (bsz * self.num_heads,
tgt_len, src_len)
tests/test_modeling_bart.py::BARTModelTest::test_torchscript
tests/test_modeling_bart.py::BARTModelTest::test_torchscript_output_attentions
tests/test_modeling_bart.py::BARTModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_bart.py:740:
TracerWarning: Converting a tensor to a Python boolean might cause the trace to
be incorrect. We can't record the data flow of Python values, so this value will
be treated as a constant in the future. This means that the trace might not
generalize to other inputs! assert attn_output.size() == (bsz * self.num_heads,
tgt_len, self.head_dim)
tests/test_modeling_bart.py::BARTModelTest::test_torchscript
tests/test_modeling_bart.py::BARTModelTest::test_torchscript_output_attentions
tests/test_modeling_bart.py::BARTModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_bart.py:287:
TracerWarning: Converting a tensor to a Python boolean might cause the trace to
be incorrect. We can't record the data flow of Python values, so this value will
be treated as a constant in the future. This means that the trace might not
generalize to other inputs! if torch.isinf(x).any() or torch.isnan(x).any():
tests/test_modeling_bart.py::BARTModelTest::test_torchscript
tests/test_modeling_bart.py::BARTModelTest::test_torchscript_output_attentions
tests/test_modeling_bart.py::BARTModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_bart.py:1190:
TracerWarning: Converting a tensor to a Python index might cause the trace to be
incorrect. We can't record the data flow of Python values, so this value will be
treated as a constant in the future. This means that the trace might not
generalize to other inputs! if len(torch.unique(eos_mask.sum(1))) > 1:
tests/test_modeling_common.py::UtilsFunctionsTest::test_top_k_top_p_filtering
tests/test_modeling_common.py:1196:
UserWarning: This overload of nonzero is deprecated: nonzero() Consider using
one of the following signatures instead: nonzero(*, bool as_tuple) (Triggered
internally at /pytorch/torch/csrc/utils/python_arg_parser.cpp:882.)
non_inf_idx = (output != -float("inf")).nonzero().to(device=torch_device)
tests/test_modeling_bert_generation.py::BertGenerationEncoderTest::test_torchscript
tests/test_modeling_bert_generation.py::BertGenerationEncoderTest::test_torchscript_output_attentions
tests/test_modeling_bert_generation.py::BertGenerationEncoderTest::test_torchscript_output_hidden_state
src/transformers/modeling_bert_generation.py:156:
TracerWarning: Converting a tensor to a Python index might cause the trace to be
incorrect. We can't record the data flow of Python values, so this value will be
treated as a constant in the future. This means that the trace might not
generalize to other inputs! position_ids = self.position_ids[:, :seq_length]
tests/test_modeling_flaubert.py: 14 warnings tests/test_modeling_xlm.py: 14
warnings
src/transformers/modeling_xlm.py:1220:
FutureWarning: The `lengths` parameter cannot be used with the XLM multiple
choice models. Please use the attention mask instead. warnings.warn(
tests/test_modeling_flaubert.py::FlaubertModelTest::test_flaubert_lm_head
tests/test_modeling_flaubert.py::FlaubertModelTest::test_model_outputs_equivalence
tests/test_modeling_xlm.py::XLMModelTest::test_model_outputs_equivalence
tests/test_modeling_xlm.py::XLMModelTest::test_xlm_lm_head
/home/stas/anaconda3/envs/py38-pt17/lib/python3.8/site-packages/torch/nn/_reduction.py:14:
UserWarning: reduction='elementwise_mean' is deprecated, please use
reduction='mean' instead. warnings.warn("reduction='elementwise_mean' is
deprecated, please use reduction='mean' instead.")
tests/test_modeling_flaubert.py::FlaubertModelTest::test_torchscript
tests/test_modeling_flaubert.py::FlaubertModelTest::test_torchscript_output_attentions
tests/test_modeling_flaubert.py::FlaubertModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_flaubert.py:188:
TracerWarning: Converting a tensor to a Python boolean might cause the trace to
be incorrect. We can't record the data flow of Python values, so this value will
be treated as a constant in the future. This means that the trace might not
generalize to other inputs! assert lengths.size(0) == bs
tests/test_modeling_flaubert.py::FlaubertModelTest::test_torchscript
tests/test_modeling_flaubert.py::FlaubertModelTest::test_torchscript_output_attentions
tests/test_modeling_flaubert.py::FlaubertModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_flaubert.py:189:
TracerWarning: Converting a tensor to a Python number might cause the trace to
be incorrect. We can't record the data flow of Python values, so this value will
be treated as a constant in the future. This means that the trace might not
generalize to other inputs! assert lengths.max().item() <= slen
tests/test_modeling_flaubert.py::FlaubertModelTest::test_torchscript
tests/test_modeling_flaubert.py::FlaubertModelTest::test_torchscript_output_attentions
tests/test_modeling_flaubert.py::FlaubertModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_flaubert.py:189:
TracerWarning: Converting a tensor to a Python boolean might cause the trace to
be incorrect. We can't record the data flow of Python values, so this value will
be treated as a constant in the future. This means that the trace might not
generalize to other inputs! assert lengths.max().item() <= slen
tests/test_modeling_flaubert.py::FlaubertModelTest::test_torchscript
tests/test_modeling_flaubert.py::FlaubertModelTest::test_torchscript_output_attentions
tests/test_modeling_flaubert.py::FlaubertModelTest::test_torchscript_output_hidden_state
tests/test_modeling_xlm.py::XLMModelTest::test_torchscript
tests/test_modeling_xlm.py::XLMModelTest::test_torchscript_output_attentions
tests/test_modeling_xlm.py::XLMModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_xlm.py:95:
TracerWarning: Converting a tensor to a Python number might cause the trace to
be incorrect. We can't record the data flow of Python values, so this value will
be treated as a constant in the future. This means that the trace might not
generalize to other inputs! assert lengths.max().item() <= slen
tests/test_modeling_flaubert.py::FlaubertModelTest::test_torchscript
tests/test_modeling_flaubert.py::FlaubertModelTest::test_torchscript_output_attentions
tests/test_modeling_flaubert.py::FlaubertModelTest::test_torchscript_output_hidden_state
tests/test_modeling_xlm.py::XLMModelTest::test_torchscript
tests/test_modeling_xlm.py::XLMModelTest::test_torchscript_output_attentions
tests/test_modeling_xlm.py::XLMModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_xlm.py:95:
TracerWarning: Converting a tensor to a Python boolean might cause the trace to
be incorrect. We can't record the data flow of Python values, so this value will
be treated as a constant in the future. This means that the trace might not
generalize to other inputs! assert lengths.max().item() <= slen
tests/test_modeling_flaubert.py::FlaubertModelTest::test_torchscript
tests/test_modeling_flaubert.py::FlaubertModelTest::test_torchscript_output_attentions
tests/test_modeling_flaubert.py::FlaubertModelTest::test_torchscript_output_hidden_state
tests/test_modeling_xlm.py::XLMModelTest::test_torchscript
tests/test_modeling_xlm.py::XLMModelTest::test_torchscript_output_attentions
tests/test_modeling_xlm.py::XLMModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_xlm.py:106:
TracerWarning: Converting a tensor to a Python boolean might cause the trace to
be incorrect. We can't record the data flow of Python values, so this value will
be treated as a constant in the future. This means that the trace might not
generalize to other inputs! assert mask.size() == (bs, slen)
tests/test_modeling_fsmt.py::FSMTModelTest::test_torchscript
tests/test_modeling_fsmt.py::FSMTModelTest::test_torchscript_output_attentions
tests/test_modeling_fsmt.py::FSMTModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_fsmt.py:1224:
TracerWarning: Converting a tensor to a Python boolean might cause the trace to
be incorrect. We can't record the data flow of Python values, so this value will
be treated as a constant in the future. This means that the trace might not
generalize to other inputs! if max_pos > self.weight.size(0):
tests/test_modeling_fsmt.py::FSMTModelTest::test_torchscript
tests/test_modeling_fsmt.py::FSMTModelTest::test_torchscript_output_attentions
tests/test_modeling_fsmt.py::FSMTModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_fsmt.py:763:
TracerWarning: Converting a tensor to a Python boolean might cause the trace to
be incorrect. We can't record the data flow of Python values, so this value will
be treated as a constant in the future. This means that the trace might not
generalize to other inputs! assert embed_dim == self.embed_dim
tests/test_modeling_fsmt.py::FSMTModelTest::test_torchscript
tests/test_modeling_fsmt.py::FSMTModelTest::test_torchscript_output_attentions
tests/test_modeling_fsmt.py::FSMTModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_fsmt.py:764:
TracerWarning: Converting a tensor to a Python boolean might cause the trace to
be incorrect. We can't record the data flow of Python values, so this value will
be treated as a constant in the future. This means that the trace might not
generalize to other inputs! assert list(query.size()) == [tgt_len, bsz,
embed_dim]
tests/test_modeling_fsmt.py::FSMTModelTest::test_torchscript
tests/test_modeling_fsmt.py::FSMTModelTest::test_torchscript_output_attentions
tests/test_modeling_fsmt.py::FSMTModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_fsmt.py:805:
TracerWarning: Converting a tensor to a Python boolean might cause the trace to
be incorrect. We can't record the data flow of Python values, so this value will
be treated as a constant in the future. This means that the trace might not
generalize to other inputs! assert attn_weights.size() == (bsz * self.num_heads,
tgt_len, src_len)
tests/test_modeling_fsmt.py::FSMTModelTest::test_torchscript
tests/test_modeling_fsmt.py::FSMTModelTest::test_torchscript_output_attentions
tests/test_modeling_fsmt.py::FSMTModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_fsmt.py:814:
TracerWarning: Converting a tensor to a Python boolean might cause the trace to
be incorrect. We can't record the data flow of Python values, so this value will
be treated as a constant in the future. This means that the trace might not
generalize to other inputs! assert key_padding_mask is None or
key_padding_mask.size()[:2] == (
tests/test_modeling_fsmt.py::FSMTModelTest::test_torchscript
tests/test_modeling_fsmt.py::FSMTModelTest::test_torchscript_output_attentions
tests/test_modeling_fsmt.py::FSMTModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_fsmt.py:833:
TracerWarning: Converting a tensor to a Python boolean might cause the trace to
be incorrect. We can't record the data flow of Python values, so this value will
be treated as a constant in the future. This means that the trace might not
generalize to other inputs! assert attn_output.size() == (bsz * self.num_heads,
tgt_len, self.head_dim)
tests/test_modeling_gpt2.py::GPT2ModelTest::test_gpt2_model_att_mask_past
tests/test_modeling_gpt2.py::GPT2ModelTest::test_gpt2_model_past
tests/test_modeling_gpt2.py::GPT2ModelTest::test_gpt2_model_past_large_inputs
src/transformers/modeling_gpt2.py:530:
FutureWarning: The `past` argument is deprecated and will be removed in a future
version, use `past_key_values` instead. warnings.warn(
tests/test_modeling_electra.py::ElectraModelTest::test_torchscript
tests/test_modeling_electra.py::ElectraModelTest::test_torchscript_output_attentions
tests/test_modeling_electra.py::ElectraModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_electra.py:180:
TracerWarning: Converting a tensor to a Python index might cause the trace to be
incorrect. We can't record the data flow of Python values, so this value will be
treated as a constant in the future. This means that the trace might not
generalize to other inputs! position_ids = self.position_ids[:, :seq_length]
tests/test_modeling_layoutlm.py::LayoutLMModelTest::test_torchscript
tests/test_modeling_layoutlm.py::LayoutLMModelTest::test_torchscript_output_attentions
tests/test_modeling_layoutlm.py::LayoutLMModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_layoutlm.py:87:
TracerWarning: Converting a tensor to a Python index might cause the trace to be
incorrect. We can't record the data flow of Python values, so this value will be
treated as a constant in the future. This means that the trace might not
generalize to other inputs! position_ids = self.position_ids[:, :seq_length]
tests/test_modeling_funnel.py::FunnelModelTest::test_torchscript
tests/test_modeling_funnel.py::FunnelModelTest::test_torchscript_output_attentions
tests/test_modeling_funnel.py::FunnelModelTest::test_torchscript_output_hidden_state
tests/test_modeling_funnel.py::FunnelBaseModelTest::test_torchscript
tests/test_modeling_funnel.py::FunnelBaseModelTest::test_torchscript_output_attentions
tests/test_modeling_funnel.py::FunnelBaseModelTest::test_torchscript_output_hidden_state
/home/stas/anaconda3/envs/py38-pt17/lib/python3.8/site-packages/torch/tensor.py:547:
TracerWarning: torch.tensor results are registered as constants in the trace.
You can safely ignore this warning if you use this function to create tensors
out of constant variables that would be the same every time you call this
function. In any other case, this might cause the trace to be incorrect. return
torch.tensor(other, dtype=dtype, device=self.device) ** self
tests/test_modeling_funnel.py::FunnelModelTest::test_torchscript
tests/test_modeling_funnel.py::FunnelModelTest::test_torchscript_output_attentions
tests/test_modeling_funnel.py::FunnelModelTest::test_torchscript_output_hidden_state
tests/test_modeling_funnel.py::FunnelBaseModelTest::test_torchscript
tests/test_modeling_funnel.py::FunnelBaseModelTest::test_torchscript_output_attentions
tests/test_modeling_funnel.py::FunnelBaseModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_funnel.py:314:
TracerWarning: Converting a tensor to a Python index might cause the trace to be
incorrect. We can't record the data flow of Python values, so this value will be
treated as a constant in the future. This means that the trace might not
generalize to other inputs! num_remove = shift * len(pooled_pos)
tests/test_modeling_funnel.py::FunnelModelTest::test_torchscript
tests/test_modeling_funnel.py::FunnelModelTest::test_torchscript_output_attentions
tests/test_modeling_funnel.py::FunnelModelTest::test_torchscript_output_hidden_state
tests/test_modeling_funnel.py::FunnelBaseModelTest::test_torchscript
tests/test_modeling_funnel.py::FunnelBaseModelTest::test_torchscript_output_attentions
tests/test_modeling_funnel.py::FunnelBaseModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_funnel.py:638:
TracerWarning: Converting a tensor to a Python boolean might cause the trace to
be incorrect. We can't record the data flow of Python values, so this value will
be treated as a constant in the future. This means that the trace might not
generalize to other inputs! pooling_flag = pooling_flag and block_index > 0
tests/test_modeling_funnel.py::FunnelModelTest::test_torchscript
tests/test_modeling_funnel.py::FunnelModelTest::test_torchscript_output_attentions
tests/test_modeling_funnel.py::FunnelModelTest::test_torchscript_output_hidden_state
tests/test_modeling_funnel.py::FunnelBaseModelTest::test_torchscript
tests/test_modeling_funnel.py::FunnelBaseModelTest::test_torchscript_output_attentions
tests/test_modeling_funnel.py::FunnelBaseModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_funnel.py:481:
TracerWarning: Converting a tensor to a Python boolean might cause the trace to
be incorrect. We can't record the data flow of Python values, so this value will
be treated as a constant in the future. This means that the trace might not
generalize to other inputs! shift = 2 if q_head.shape[1] != context_len else 1
tests/test_modeling_funnel.py::FunnelModelTest::test_torchscript
tests/test_modeling_funnel.py::FunnelModelTest::test_torchscript_output_attentions
tests/test_modeling_funnel.py::FunnelModelTest::test_torchscript_output_hidden_state
tests/test_modeling_funnel.py::FunnelBaseModelTest::test_torchscript
tests/test_modeling_funnel.py::FunnelBaseModelTest::test_torchscript_output_attentions
tests/test_modeling_funnel.py::FunnelBaseModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_funnel.py:431:
TracerWarning: Converting a tensor to a Python index might cause the trace to be
incorrect. We can't record the data flow of Python values, so this value will be
treated as a constant in the future. This means that the trace might not
generalize to other inputs! positional_attn = positional_attn[..., :context_len]
tests/test_modeling_funnel.py::FunnelModelTest::test_torchscript
tests/test_modeling_funnel.py::FunnelModelTest::test_torchscript_output_attentions
tests/test_modeling_funnel.py::FunnelModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_funnel.py:678:
TracerWarning: Converting a tensor to a Python index might cause the trace to be
incorrect. We can't record the data flow of Python values, so this value will be
treated as a constant in the future. This means that the trace might not
generalize to other inputs! output = output[:, : target_len - 1]
tests/test_modeling_gpt2.py::GPT2ModelTest::test_torchscript
tests/test_modeling_gpt2.py::GPT2ModelTest::test_torchscript_output_attentions
tests/test_modeling_gpt2.py::GPT2ModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_gpt2.py:1058:
TracerWarning: Converting a tensor to a Python index might cause the trace to be
incorrect. We can't record the data flow of Python values, so this value will be
treated as a constant in the future. This means that the trace might not
generalize to other inputs! pooled_logits = logits[range(batch_size),
sequence_lengths]
tests/test_modeling_openai.py::OpenAIGPTModelTest::test_torchscript
tests/test_modeling_openai.py::OpenAIGPTModelTest::test_torchscript_output_attentions
tests/test_modeling_openai.py::OpenAIGPTModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_openai.py:467:
TracerWarning: Converting a tensor to a Python index might cause the trace to be
incorrect. We can't record the data flow of Python values, so this value will be
treated as a constant in the future. This means that the trace might not
generalize to other inputs! position_ids = self.position_ids[None, :
input_shape[-1]]
tests/test_modeling_openai.py::OpenAIGPTModelTest::test_torchscript
tests/test_modeling_openai.py::OpenAIGPTModelTest::test_torchscript_output_attentions
tests/test_modeling_openai.py::OpenAIGPTModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_openai.py:180:
TracerWarning: Converting a tensor to a Python float might cause the trace to be
incorrect. We can't record the data flow of Python values, so this value will be
treated as a constant in the future. This means that the trace might not
generalize to other inputs! w = w / math.sqrt(v.size(-1))
tests/test_modeling_openai.py::OpenAIGPTModelTest::test_torchscript
tests/test_modeling_openai.py::OpenAIGPTModelTest::test_torchscript_output_attentions
tests/test_modeling_openai.py::OpenAIGPTModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_openai.py:183:
TracerWarning: Converting a tensor to a Python index might cause the trace to be
incorrect. We can't record the data flow of Python values, so this value will be
treated as a constant in the future. This means that the trace might not
generalize to other inputs! b = self.bias[:, :, : w.size(-2), : w.size(-1)]
tests/test_modeling_openai.py::OpenAIGPTModelTest::test_torchscript
tests/test_modeling_openai.py::OpenAIGPTModelTest::test_torchscript_output_attentions
tests/test_modeling_openai.py::OpenAIGPTModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_openai.py:823:
TracerWarning: Converting a tensor to a Python index might cause the trace to be
incorrect. We can't record the data flow of Python values, so this value will be
treated as a constant in the future. This means that the trace might not
generalize to other inputs! pooled_logits = logits[range(batch_size),
sequence_lengths]
tests/test_modeling_rag.py: 12 warnings tests/test_retrieval_rag.py: 1 warning
src/transformers/tokenization_utils_base.py:613:
UserWarning: To copy construct from a tensor, it is recommended to use
sourceTensor.clone().detach() or
sourceTensor.clone().detach().requires_grad_(True), rather than
torch.tensor(sourceTensor). tensor = as_tensor(value)
tests/test_modeling_reformer.py: 58 warnings tests/test_modeling_transfo_xl.py:
18 warnings
/home/stas/anaconda3/envs/py38-pt17/lib/python3.8/site-packages/torch/nn/modules/container.py:434:
UserWarning: Setting attributes on ParameterList is not supported.
warnings.warn("Setting attributes on ParameterList is not supported.")
tests/test_modeling_mobilebert.py::MobileBertModelTest::test_torchscript
tests/test_modeling_mobilebert.py::MobileBertModelTest::test_torchscript_output_attentions
tests/test_modeling_mobilebert.py::MobileBertModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_mobilebert.py:192:
TracerWarning: Converting a tensor to a Python index might cause the trace to be
incorrect. We can't record the data flow of Python values, so this value will be
treated as a constant in the future. This means that the trace might not
generalize to other inputs! position_ids = self.position_ids[:, :seq_length]
tests/test_modeling_mobilebert.py::MobileBertModelTest::test_torchscript
tests/test_modeling_mobilebert.py::MobileBertModelTest::test_torchscript_output_attentions
tests/test_modeling_mobilebert.py::MobileBertModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_mobilebert.py:534:
TracerWarning: torch.tensor results are registered as constants in the trace.
You can safely ignore this warning if you use this function to create tensors
out of constant variables that would be the same every time you call this
function. In any other case, this might cause the trace to be incorrect.
torch.tensor(1000),
tests/test_modeling_reformer.py::ReformerLSHAttnModelTest::test_reformer_cached_inference
src/transformers/modeling_reformer.py:899:
UserWarning: This overload of nonzero is deprecated: nonzero() Consider using
one of the following signatures instead: nonzero(*, bool as_tuple) (Triggered
internally at /pytorch/torch/csrc/utils/python_arg_parser.cpp:882.)
relevant_bucket_idx = (bucket_idx == (bucket_idx.shape[-1] - 1)).nonzero()
tests/test_modeling_t5.py::T5ModelTest::test_export_to_onnx
tests/test_modeling_t5.py::T5ModelTest::test_torchscript_output_attentions
tests/test_modeling_t5.py::T5ModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_utils.py:244:
TracerWarning: Converting a tensor to a Python boolean might cause the trace to
be incorrect. We can't record the data flow of Python values, so this value will
be treated as a constant in the future. This means that the trace might not
generalize to other inputs! if causal_mask.shape[1] < attention_mask.shape[1]:
tests/test_modeling_t5.py: 95 warnings
/home/stas/anaconda3/envs/py38-pt17/lib/python3.8/site-packages/torch/onnx/utils.py:760:
DeprecationWarning: an integer is required (got type
torch._C._onnx.TensorProtoDataType). Implicit conversion to integers using
__int__ is deprecated, and may be removed in a future version of Python.
return getattr(node, kind + "_")(name, value)
tests/test_modeling_t5.py::T5ModelTest::test_export_to_onnx
tests/test_modeling_t5.py::T5ModelTest::test_export_to_onnx
tests/test_modeling_t5.py::T5ModelTest::test_export_to_onnx
/home/stas/anaconda3/envs/py38-pt17/lib/python3.8/site-packages/torch/onnx/symbolic_opset9.py:1638:
DeprecationWarning: an integer is required (got type float). Implicit conversion
to integers using __int__ is deprecated, and may be removed in a future version
of Python. value_t=torch.tensor([fill_value],
dtype=sym_help.scalar_type_to_pytorch_type[dtype]))
tests/test_modeling_tf_auto.py::TFAutoModelTest::test_from_identifier_from_model_type
tests/test_modeling_tf_auto.py::TFAutoModelTest::test_from_pretrained_identifier
src/transformers/modeling_tf_auto.py:697:
FutureWarning: The class `TFAutoModelWithLMHead` is deprecated and will be
removed in a future version. Please use `TFAutoModelForCausalLM` for causal
language models, `TFAutoModelForMaskedLM` for masked language models and
`TFAutoModelForSeq2SeqLM` for encoder-decoder models. warnings.warn(
tests/test_modeling_squeezebert.py::SqueezeBertModelTest::test_torchscript_output_attentions
tests/test_modeling_squeezebert.py::SqueezeBertModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_squeezebert.py:78:
TracerWarning: Converting a tensor to a Python index might cause the trace to be
incorrect. We can't record the data flow of Python values, so this value will be
treated as a constant in the future. This means that the trace might not
generalize to other inputs! position_ids = self.position_ids[:, :seq_length]
tests/test_modeling_tf_flaubert.py: 9 warnings tests/test_modeling_tf_xlm.py: 9
warnings
src/transformers/modeling_tf_xlm.py:994:
FutureWarning: The `lengths` parameter cannot be used with the XLM multiple
choice models. Please use the attention mask instead. warnings.warn(
tests/test_modeling_tf_flaubert.py::TFFlaubertModelTest::test_graph_mode
tests/test_modeling_tf_xlm.py::TFXLMModelTest::test_graph_mode
/home/stas/anaconda3/envs/py38-pt17/lib/python3.8/site-packages/tensorflow/python/autograph/impl/api.py:493:
FutureWarning: The `lengths` parameter cannot be used with the XLM multiple
choice models. Please use the attention mask instead. return
py_builtins.overload_of(f)(*args)
tests/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_compile_tf_model
tests/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_config
tests/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_keras_save_load
tests/test_modeling_xlnet.py::XLNetModelTest::test_config
tests/test_modeling_xlnet.py::XLNetModelTest::test_correct_missing_keys
tests/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_save_load
tests/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_train_pipeline_custom_model
tests/test_modeling_xlnet.py::XLNetModelTest::test_save_load
src/transformers/configuration_xlnet.py:205:
FutureWarning: This config doesn't use attention memories, a core feature of
XLNet. Consider setting `mem_len` to a non-zero value, for example `xlnet =
XLNetLMHeadModel.from_pretrained('xlnet-base-cased'', mem_len=1024)`, for
accurate training performance as well as an order of magnitude faster inference.
Starting from version 3.5.0, the default parameter will be 1024, following the
implementation in https://arxiv.org/abs/1906.08237 warnings.warn(
tests/test_modeling_xlm.py::XLMModelTest::test_torchscript
tests/test_modeling_xlm.py::XLMModelTest::test_torchscript_output_attentions
tests/test_modeling_xlm.py::XLMModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_xlm.py:531:
TracerWarning: Converting a tensor to a Python boolean might cause the trace to
be incorrect. We can't record the data flow of Python values, so this value will
be treated as a constant in the future. This means that the trace might not
generalize to other inputs! assert lengths.size(0) == bs
tests/test_modeling_xlm.py::XLMModelTest::test_torchscript
tests/test_modeling_xlm.py::XLMModelTest::test_torchscript_output_attentions
tests/test_modeling_xlm.py::XLMModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_xlm.py:532:
TracerWarning: Converting a tensor to a Python number might cause the trace to
be incorrect. We can't record the data flow of Python values, so this value will
be treated as a constant in the future. This means that the trace might not
generalize to other inputs! assert lengths.max().item() <= slen
tests/test_modeling_xlm.py::XLMModelTest::test_torchscript
tests/test_modeling_xlm.py::XLMModelTest::test_torchscript_output_attentions
tests/test_modeling_xlm.py::XLMModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_xlm.py:532:
TracerWarning: Converting a tensor to a Python boolean might cause the trace to
be incorrect. We can't record the data flow of Python values, so this value will
be treated as a constant in the future. This means that the trace might not
generalize to other inputs! assert lengths.max().item() <= slen
tests/test_modeling_xlm.py::XLMModelTest::test_torchscript
tests/test_modeling_xlm.py::XLMModelTest::test_torchscript_output_attentions
tests/test_modeling_xlm.py::XLMModelTest::test_torchscript_output_hidden_state
src/transformers/modeling_xlm.py:546:
TracerWarning: Converting a tensor to a Python index might cause the trace to be
incorrect. We can't record the data flow of Python values, so this value will be
treated as a constant in the future. This means that the trace might not
generalize to other inputs! position_ids = self.position_ids[:, :slen]
tests/test_optimization.py::OptimizationTest::test_adafactor
src/transformers/optimization.py:512:
UserWarning: This overload of add_ is deprecated: add_(Number alpha, Tensor
other) Consider using one of the following signatures instead: add_(Tensor
other, *, Number alpha) (Triggered internally at
/pytorch/torch/csrc/utils/python_arg_parser.cpp:882.)
exp_avg_sq.mul_(beta2t).add_(1.0 - beta2t, update)
tests/test_optimization.py::ScheduleInitTest::test_schedulers
/home/stas/anaconda3/envs/py38-pt17/lib/python3.8/site-packages/torch/optim/lr_scheduler.py:247:
UserWarning: To get the last learning rate computed by the scheduler, please
use `get_last_lr()`. warnings.warn("To get the last learning rate computed by
the scheduler, "
tests/test_optimization.py::ScheduleInitTest::test_schedulers
/home/stas/anaconda3/envs/py38-pt17/lib/python3.8/site-packages/torch/optim/lr_scheduler.py:131:
UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`.
In PyTorch 1.1.0 and later, you should call them in the opposite order:
`optimizer.step()` before `lr_scheduler.step()`. Failure to do this will
result in PyTorch skipping the first value of the learning rate schedule. See
more details at
https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
warnings.warn("Detected call of `lr_scheduler.step()` before
`optimizer.step()`. "
tests/test_optimization.py::ScheduleInitTest::test_schedulers
/home/stas/anaconda3/envs/py38-pt17/lib/python3.8/site-packages/torch/optim/lr_scheduler.py:216:
UserWarning: Please also save or load the state of the optimizer when saving
or loading the scheduler. warnings.warn(SAVE_STATE_WARNING, UserWarning)
tests/test_optimization.py::ScheduleInitTest::test_schedulers
/home/stas/anaconda3/envs/py38-pt17/lib/python3.8/site-packages/torch/optim/lr_scheduler.py:234:
UserWarning: Please also save or load the state of the optimizer when saving
or loading the scheduler. warnings.warn(SAVE_STATE_WARNING, UserWarning)
tests/test_tokenization_auto.py::AutoTokenizerTest::test_tokenizer_identifier_with_correct_config
tests/test_tokenization_mbart.py::MBartEnroIntegrationTest::test_batch_fairseq_parity
tests/test_tokenization_t5.py::T5TokenizationTest::test_empty_target_text
tests/test_tokenization_t5.py::T5TokenizationTest::test_eos_in_input
tests/test_tokenization_t5.py::T5TokenizationTest::test_max_target_length
tests/test_tokenization_t5.py::T5TokenizationTest::test_outputs_not_longer_than_maxlen
tests/test_tokenization_t5.py::T5TokenizationTest::test_prepare_seq2seq_batch
src/transformers/tokenization_utils_base.py:1421:
FutureWarning: The `max_len` attribute has been deprecated and will be removed
in a future version, use `model_max_length` instead. warnings.warn(
tests/test_tokenization_albert.py: 2 warnings tests/test_tokenization_bart.py: 2
warnings tests/test_tokenization_bert.py: 2 warnings
tests/test_tokenization_bert_generation.py: 1 warning
tests/test_tokenization_bertweet.py: 1 warning
tests/test_tokenization_blenderbot.py: 1 warning
tests/test_tokenization_ctrl.py: 1 warning tests/test_tokenization_camembert.py:
2 warnings tests/test_tokenization_distilbert.py: 4 warnings
tests/test_tokenization_dpr.py: 8 warnings tests/test_tokenization_fsmt.py: 1
warning tests/test_tokenization_funnel.py: 2 warnings
tests/test_tokenization_herbert.py: 2 warnings tests/test_tokenization_gpt2.py:
1 warning tests/test_tokenization_layoutlm.py: 2 warnings
tests/test_tokenization_marian.py: 1 warning tests/test_tokenization_lxmert.py:
2 warnings tests/test_tokenization_mbart.py: 2 warnings
tests/test_tokenization_pegasus.py: 2 warnings
tests/test_tokenization_openai.py: 1 warning tests/test_tokenization_phobert.py:
1 warning tests/test_tokenization_deberta.py: 1 warning
tests/test_tokenization_prophetnet.py: 1 warning
tests/test_tokenization_reformer.py: 1 warning
tests/test_tokenization_squeezebert.py: 4 warnings
tests/test_tokenization_t5.py: 2 warnings tests/test_tokenization_roberta.py: 2
warnings tests/test_tokenization_transfo_xl.py: 1 warning
tests/test_tokenization_xlm.py: 1 warning
tests/test_tokenization_xlm_prophetnet.py: 1 warning
tests/test_tokenization_xlnet.py: 2 warnings
tests/test_tokenization_xlm_roberta.py: 2 warnings
src/transformers/tokenization_utils_base.py:2025:
FutureWarning: The `pad_to_max_length` argument is deprecated and will be
removed in a future version, use `padding=True` or `padding='longest'` to pad to
the longest sequence in the batch, or use `padding='max_length'` to pad to a max
length. In this case, you can give a specific length with `max_length` (e.g.
`max_length=45`) or leave max_length to None to pad to the maximal input size of
the model (e.g. 512 for Bert). warnings.warn(
tests/test_tokenization_t5.py::T5TokenizationTest::test_eos_in_input
tests/test_tokenization_t5.py::T5TokenizationTest::test_eos_treatment
src/transformers/tokenization_t5.py:183:
UserWarning: This sequence already has </s>. In future versions this behavior
may lead to duplicated eos tokens being added. warnings.warn(
tests/test_trainer.py: 44 warnings
/home/stas/anaconda3/envs/py38-pt17/lib/python3.8/site-packages/torch/nn/parallel/_functions.py:64:
UserWarning: Was asked to gather along dimension 0, but all input tensors were
scalars; will instead unsqueeze and return a vector. warnings.warn('Was asked
to gather along dimension 0, but all '
tests/test_trainer.py::TrainerIntegrationTest::test_can_resume_training
tests/test_trainer_callback.py::TrainerCallbackTest::test_event_flow
/home/stas/anaconda3/envs/py38-pt17/lib/python3.8/site-packages/torch/cuda/nccl.py:48:
DeprecationWarning: Using or importing the ABCs from 'collections' instead of
from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop
working if not isinstance(inputs, collections.Container) or isinstance(inputs,
torch.Tensor):
-- Docs: https://docs.pytest.org/en/stable/warnings.html
```
@LysandreJik | 10-26-2020 17:27:46 | 10-26-2020 17:27:46 | ?<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 8,059 | closed | infer entailment label id on zero shot pipeline | # What does this PR do?
Adds an optional argument to the zero shot pipeline constructor to specify the label id of the NLI model that corresponds to "entailment", which the pipeline needs in order to calculate each candidate label's score. Most models in the hub use the last label id, but some differ (e.g. the recent [ynie/roberta-large-snli_mnli_fever_anli_R1_R2_R3-nli](https://huggingface.co/ynie/roberta-large-snli_mnli_fever_anli_R1_R2_R3-nli)).
If the argument is not passed, the pipeline will attempt to look up the entailment dimension in the model config's id2label mapping. If the config does not specify the entailment dimension, the value will be set to `-1`, indicating the last dimension of the model output.
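A minimal sketch of that lookup, for illustration only (the helper name and the exact matching rule are assumptions, not the pipeline's actual code):
```python
def infer_entailment_id(config) -> int:
    """Return the model output dimension that corresponds to "entailment".

    Falls back to -1 (the last dimension of the model output) when the
    config's label mapping does not name an entailment label.
    """
    for label, idx in getattr(config, "label2id", {}).items():
        if label.lower().startswith("entail"):
            return idx
    return -1
```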
With this logic in place, the arg only needs to be passed when both (1) the model's entailment label id is not the last id and (2) when the model config's `label2id` doesn't specify the entailment id. | 10-26-2020 17:24:00 | 10-26-2020 17:24:00 | Wouldn't it be better to:
- ask the model authors for the relevant models to update their config.json to include a `id2label`
- and/or to modify them automatically for them on the hub?
I feel this PR is adding a feature that works around an issue when we should be fixing the root issue (also fixes the displayed labels, other future features, etc.). Wdyt?<|||||>@julien-c Well, to be clear this PR ~~does~~ did two things:
1. Switches from always using the last index to using the index determined by looking in the model config if present
2. Gives the user the option to manually override the index in case the information isn't present in the config
If I understand correctly, your issue is just with (2). I think it's a fair point. I don't think we'll be able to ensure that [all NLI models](https://huggingface.co/models?search=nli) have a clearly defined label mapping though. But instead of an override arg, I think it might be better to just add a warning if the entailment label ID can't be found in the config.<|||||>@joeddav Ok yes 1/ is great.
> I don't think we'll be able to ensure that [all NLI models](https://huggingface.co/models?search=nli) have a clearly defined label mapping though.
Why not?<|||||>@julien-c Just because there are almost 100 results for ["NLI"](https://huggingface.co/models?search=nli) on the model hub and I'd guess from a quick sampling that the majority don't have a label mapping defined. For each model we'd have to figure out which label is which, which would mean either getting the author to look it up and tell us or else running tests on the correct dataset to figure it ourselves.
Do you think it'd be worthwhile to warn the user when uploading or creating configs with generic/missing label mappings (or with any other important fields missing) going forward? Defining a label2id seems like a rather obscure property that I would assume is purely cosmetic if I were uploading a model, i.e. I wouldn't expect it to actually impact code behavior for someone using my model. |
transformers | 8,058 | closed | [testing] port test_trainer_distributed to run with pytest | Extracting the request from https://github.com/huggingface/transformers/pull/7993#issuecomment-716508513 to this issue to make it easier to track.
Now that we have a framework to run distributed under `pytest`, `test_trainer_distributed` needs to be ported there.
I will work on that.
| 10-26-2020 17:18:49 | 10-26-2020 17:18:49 | Ooops, closed it by mistake too early. But it has been resolved here:
https://github.com/huggingface/transformers/pull/8107 |
transformers | 8,057 | closed | [testing] fixing crash in deberta | This PR fixes a crash in deberta tests w/ pytorch-1.7+:
```
RuntimeError: diff_view_meta->output_nr_ == 0 INTERNAL ASSERT FAILED at "/opt/conda/conda- \
bld/pytorch_1603436966316/work/torch/csrc/autograd/variable.cpp":363, please report a bug to PyTorch.
```
All credits go to @gchanan, thank you! For details of why please see https://github.com/huggingface/transformers/issues/8022#issuecomment-716252599
Fixes: #8022 | 10-26-2020 16:53:30 | 10-26-2020 16:53:30 | p.s. apparently there has been a deprecation warning with pytorch-1.6, but since this project isn't quite paying attention to the warnings, it led to this crash with pytorch-1.7. It seems to be a weird situation where a deprecation warning hasn't been turned into an error and instead leads to a crash with this particular issue, but perhaps setting an intention to keep the warnings in check would save a possible hassle in the future. https://github.com/huggingface/transformers/issues/8060
Perhaps what would help is to automatically turn selective types of warnings into errors, and thus not let those slide until they become a problem.
Also having a scheduled CI that runs occasionally on pytorch-nightly (and any release candidates) would give an early alert. |
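As an illustration of the first suggestion, warning categories can be promoted to errors through pytest's `filterwarnings` machinery; the snippet below is only a sketch of the idea, not the project's actual test configuration:
```python
# conftest.py (sketch): fail the test suite on selected warning categories so
# deprecations surface immediately instead of accumulating until they crash.
def pytest_configure(config):
    config.addinivalue_line("filterwarnings", "error::DeprecationWarning")
    config.addinivalue_line("filterwarnings", "error::FutureWarning")
```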
transformers | 8,056 | closed | [TF] from_pt should respect authorized_unexpected_keys | This comes in handy when
```
"model.encoder.embed_tokens.weight",
"model.decoder.embed_tokens.weight",
```
are in the PT state dict but not the TF symbolic weights.
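A sketch of what this looks like on the model side (the class name is made up for illustration; only the attribute name from the PR title and the two weight keys above are taken from this PR):
```python
from transformers import TFPreTrainedModel


class TFIllustrativeSeq2SeqModel(TFPreTrainedModel):  # hypothetical class
    # PyTorch weights that have no TF counterpart; with this change they are
    # skipped instead of reported when loading with from_pt=True.
    authorized_unexpected_keys = [
        r"model.encoder.embed_tokens.weight",
        r"model.decoder.embed_tokens.weight",
    ]
```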
| 10-26-2020 16:44:52 | 10-26-2020 16:44:52 | |
transformers | 8,055 | closed | BertEncoder has no attribute 'bias' when convert tf checkpoint | # ❓ Questions & Help
BertEncoder object has no attribute 'bias' when converting a TF checkpoint
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
<!-- Description of your issue -->
I tried to convert my pretrained BERT tf checkpoint but I got this error:
```
/usr/local/lib/python3.6/dist-packages/transformers/modeling_bert.py in load_tf_weights_in_bert(model, config, tf_checkpoint_path)
133 pointer = getattr(pointer, "weight")
134 elif scope_names[0] == "output_bias" or scope_names[0] == "beta":
--> 135 pointer = getattr(pointer, "bias")
136 elif scope_names[0] == "output_weights":
137 pointer = getattr(pointer, "weight")
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __getattr__(self, name)
770 return modules[name]
771 raise ModuleAttributeError("'{}' object has no attribute '{}'".format(
--> 772 type(self).__name__, name))
773
774 def __setattr__(self, name: str, value: Union[Tensor, 'Module']) -> None:
ModuleAttributeError: 'BertEncoder' object has no attribute 'bias'
```
I used both the load_tf_weights_in_bert function and BertForPreTraining, but neither worked.
My code:
```
from transformers import BertConfig, BertForPreTraining

config = BertConfig.from_json_file('bertvn_base/bertvn_base_config.json')
model = BertForPreTraining.from_pretrained('bertvn_base/model.ckpt', from_tf=True, config=config)
```
My config:
```
{"attention_probs_dropout_prob": 0,
"hidden_act": "gelu",
"hidden_dropout_prob": 0,
"embedding_size": 768,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"max_position_embeddings": 192,
"num_attention_heads": 12,
"num_hidden_layers": 12,
"num_hidden_groups": 12,
"net_structure_type": 0,
"gap_size": 0,
"num_memory_blocks": 0,
"inner_group_num": 1,
"down_scale_factor": 1,
"type_vocab_size": 2,
"vocab_size": 120000
}
```
Thanks for your help!
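One way to narrow an error like this down is to list the variable names stored in the checkpoint and compare them with the scope names the converter tries to map; a sketch (the checkpoint path is a placeholder):
```python
# Print every variable name/shape in the TF checkpoint so mismatched scopes
# (the ones load_tf_weights_in_bert cannot map onto the PyTorch model) stand out.
import tensorflow as tf

for name, shape in tf.train.list_variables("bertvn_base/model.ckpt"):
    print(name, shape)
```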
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. --> | 10-26-2020 16:37:17 | 10-26-2020 16:37:17 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 8,054 | closed | Add m2m 100 multilingual translation model from FAIR | Weights, code are available.
+ Fairseq Code: https://github.com/pytorch/fairseq/tree/master/examples/m2m_100
+ Paper: https://arxiv.org/abs/2010.11125
+ This model will not run on 1 V100 GPU, so model parallelism will be needed.
+ I would expect the state dict to be very similar to mBART, but not sure yet.
+ All I've done is download the state dict, run their command, and asked for help https://github.com/pytorch/fairseq/issues/2772#issuecomment-716152453 when it broke.
Leaving this unassigned in case somebody else wants to take over.
| 10-26-2020 16:17:38 | 10-26-2020 16:17:38 | If it helps, I managed to load the weights from M2M100 - 418M param model to Mbart
```
from transformers import MBartForConditionalGeneration, MBartConfig, AutoTokenizer, AutoModelForSeq2SeqLM
from fairseq import checkpoint_utils, options, tasks, utils
import torch
with open('418M_last_checkpoint.pt', 'rb') as f:
state = torch.load(f, map_location=torch.device("cpu"))
state = checkpoint_utils._upgrade_state_dict(state)
args = state['args']
args.fixed_dictionary = "model_dict.128k.txt"
args.source_lang = 'en'
args.target_lang = 'hi'
weights = state['model']
keys = [k for k in weights.keys()]
for key in keys:
if key.startswith('encoder.') or key.startswith('decoder.'):
new_key = 'model.' + key
weights[new_key] = weights[key]
del weights[key]
weights['model.shared.weight'] = weights['model.encoder.embed_tokens.weight']
config1 = MBartConfig(
activation_function='relu',
vocab_size=128112,
encoder_layerdrop=0.05,
decoder_layerdrop=0.05,
attention_dropout=0.1,
add_final_layer_norm=True,
normalize_before=True,
scale_embedding=True,
static_position_embeddings=True,
pad_token_id=1,
bos_token_id=0,
eos_token_id=2,
normalize_embedding=True,
use_cache=False
)
mbart1 = MBartForConditionalGeneration(config1)
mbart1.load_state_dict(weights, strict=False)
```
This is based on the checkpoint and dictionary provided [here](https://github.com/pytorch/fairseq/tree/master/examples/m2m_100#418m-and-12b-model).
I also had to replace the position embeddings in `modeling_bart` with the [code from fairseq](https://github.com/pytorch/fairseq/blob/master/fairseq/modules/sinusoidal_positional_embedding.py), because the fairseq implementation of the embeddings seems to be different from the one present in `modeling_bart`.
Although the weights load successfully it generates random tokens, albeit in the correct language. I have a feeling that there's something going on in fairseq's generate function that is not accounted for here, though I may be wrong.
Would greatly appreciate any ideas you might have to debug the generation aspect.
Hope this helps! Thanks!<|||||>This issue has been stale for 1 month.<|||||>`M2M100` is now integrated!
doc: https://huggingface.co/transformers/master/model_doc/m2m_100.html
models: https://huggingface.co/models?filter=m2m_100<|||||>> `M2M100` is now integrated!
>
> doc: https://huggingface.co/transformers/master/model_doc/m2m_100.html
> models: https://huggingface.co/models?filter=m2m_100
There is a problem with loading the model `model = M2M100ForConditionalGeneration.from_pretrained('facebook/m2m100_418M')`
produces OSError: Unable to load weights from pytorch checkpoint file for 'facebook/m2m100_418M' at '/root/.cache/huggingface/transformers/f9eabc2ccf1b4ddafac5c7f6dc837130ab7122d75ee98a64ed0a446a20b84871.53192defd013a2942c1d27b5842eba64b84d0e49943b0892c8f71967bf053029'
A manual download of pytorch_model.bin leads to a similar exception, as it produces a zip.<|||||>Hi @ciortanmadalina
I just tried this and can load the model successfully. This seems to be the issue with the cache, can you delete the cache and try again?<|||||>> Hi @ciortanmadalina
>
> I just tried this and can load the model successfully. This seems to be the issue with the cache, can you delete the cache and try again?
I solved it: the problem was not the cache but the pytorch version (1.4), which, strangely enough, didn't raise a problem for the other transformer models I used (e.g. T5, Bert). Once I upgraded to 1.7, the issue was gone. Thanks for your answer!
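For reference, a minimal usage sketch of the integrated model following the documented API (the input sentence and language codes are only examples):
```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")
model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")

tokenizer.src_lang = "en"
encoded = tokenizer("Life is like a box of chocolates.", return_tensors="pt")
# generation is forced to start with the target-language token
generated = model.generate(**encoded, forced_bos_token_id=tokenizer.get_lang_id("fr"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```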
transformers | 8,053 | closed | Minor error fix of 'bart-large-cnn' details in the pretrained_models doc | I found that there seemed to be a mistake regarding "facebook/bart-large-cnn" in the pretrained_models doc.
# What does this PR do?
While I was checking the model explanations in the pretrained_models doc, I found what seemed to be a mistake.
Regarding `facebook/bart-large-cnn`, the details of the model are as follows:
```
12-layer, 1024-hidden, 16-heads, 406M parameters (same as base)
bart-large base architecture finetuned on cnn summarization task
```
If my understanding is correct, it seems that `12-layer` and `(same as base)` should be `24-layer` and `(same as large)`.
I asked a question in the forum about this:
https://discuss.huggingface.co/t/there-seems-to-be-a-mistake-in-documentation-pretrained-models-html-regarding-bart/1746/
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
documentation: @sgugger
(Thank you for kindly answering my question in the forum!)
| 10-26-2020 14:54:37 | 10-26-2020 14:54:37 | Thanks!<|||||>Thank you, too! |
transformers | 8,052 | closed | Fix a bug for `CallbackHandler.callback_list` | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fix a bug where `CallbackHandler.callback_list` fails when given callbacks are duplicated:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-40-9605b122f4d1> in <module>()
2 from transformers.trainer import DEFAULT_CALLBACKS
3
----> 4 CallbackHandler(DEFAULT_CALLBACKS + [MLflowCallback], "model", "optimizer", "lr_scheduler")
2 frames
/usr/local/lib/python3.6/dist-packages/transformers/trainer_callback.py in __init__(self, callbacks, model, optimizer, lr_scheduler)
277 self.callbacks = []
278 for cb in callbacks:
--> 279 self.add_callback(cb)
280 self.model = model
281 self.optimizer = optimizer
/usr/local/lib/python3.6/dist-packages/transformers/trainer_callback.py in add_callback(self, callback)
299 f"You are adding a {cb_class} to the callbacks of this Trainer, but there is already one. The current"
300 + "list of callbacks is\n:"
--> 301 + self.callback_list
302 )
303 self.callbacks.append(cb)
/usr/local/lib/python3.6/dist-packages/transformers/trainer_callback.py in callback_list(self)
326 @property
327 def callback_list(self):
--> 328 return "\n".join(self.callbacks)
329
330 def on_init_end(self, args: TrainingArguments, state: TrainerState, control: TrainerControl):
TypeError: sequence item 0: expected str instance, DefaultFlowCallback found
```
Code to reproduce the bug:
```python
from transformers.trainer_callback import CallbackHandler
from transformers.trainer import DEFAULT_CALLBACKS
CallbackHandler(DEFAULT_CALLBACKS + [MLflowCallback], "model", "optimizer", "lr_scheduler")
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| 10-26-2020 14:47:10 | 10-26-2020 14:47:10 | Do I need to add a test in `tests/test_trainer_callback.py` verifying that instantiating a trainer with duplicated callbacks doesn't fail?
```python
# this should not fail
trainer = self.get_trainer(
callbacks=[MyTestTrainerCallback, MyTestTrainerCallback],
)
```<|||||>@sgugger Thanks for the approval. I just added a test that verifies the following:
1. `Trainer` can be instantiated with duplicated callbacks.
2. A warning is emitted for duplicated callbacks.
|
transformers | 8,051 | closed | minor model card description updates | # What does this PR do?
Makes a few minor updates to the [joeddav/xlm-roberta-large-xnli](https://huggingface.co/joeddav/xlm-roberta-large-xnli) model card, such as removing the reference to the deprecated zero shot demo when the user can play with zero shot classification via the embedded widget. Also links to distilled bart models. | 10-26-2020 14:03:33 | 10-26-2020 14:03:33 | |
transformers | 8,050 | closed | invalid argument wwm passed to the run_language_modeling.py file |
# What does this PR do?
`--wwm` can't be used as an argument to run_language_modeling.py; it should be changed to `--whole_word_mask`.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
@stefan-it | 10-26-2020 13:40:40 | 10-26-2020 13:40:40 | |
transformers | 8,049 | closed | Fix + Test | Fix an edge case of the blenderbot-90 tokenizer.
Closes #8029
# Context
If the blenderbot-90 tokenizer is used to tokenize the following sequence:
```py
sequence = "Ok ."
```
It will split it in two tokens at first:
https://github.com/huggingface/transformers/blob/8bbe8247f13057b7df1b2c9abbfacb05b30020bf/src/transformers/tokenization_blenderbot.py#L221
Those two tokens will be `['Ok', '.']`
The issue is that, when passed the second token, the `bpe` method will convert it from `'.'` to `' .'` here:
https://github.com/huggingface/transformers/blob/8bbe8247f13057b7df1b2c9abbfacb05b30020bf/src/transformers/tokenization_blenderbot.py#L160
This then gets split on spaces here:
https://github.com/huggingface/transformers/blob/8bbe8247f13057b7df1b2c9abbfacb05b30020bf/src/transformers/tokenization_blenderbot.py#L166
This is where the issue lies, as it creates two strings: `["", "."]`, the first one being empty.
It then crashes a bit further as we try to index the empty string:
https://github.com/huggingface/transformers/blob/8bbe8247f13057b7df1b2c9abbfacb05b30020bf/src/transformers/tokenization_blenderbot.py#L171
## Proposal
Ensure that the token has a length > 0 before trying to manage it, otherwise ignore that token.
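A rough illustration of the guard (a hypothetical sketch, not the exact diff):
```python
# Hypothetical illustration: when a fragment like " ." is split on spaces,
# the leading empty string must be skipped before indexing its characters.
def split_fragment(fragment: str):
    pieces = []
    for token in fragment.split(" "):
        if len(token) == 0:
            continue  # "" would crash on token[-1]
        pieces.append(token)
    return pieces

print(split_fragment(" ."))  # ['.']
```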
Added a test.
| 10-26-2020 13:21:17 | 10-26-2020 13:21:17 | |
transformers | 8,048 | closed | Fix label name in DataCollatorForNextSentencePrediction test | # What does this PR do?
Labels have been renamed in `DataCollatorForNextSentencePrediction` to go with the fact that `masked_lm_labels` is a deprecated argument, but the corresponding test was not adjusted accordingly; this PR fixes that.
Fixes #8034 | 10-26-2020 13:14:13 | 10-26-2020 13:14:13 | |
transformers | 8,047 | closed | [T5] Unused `n_positions` and `max_position_embeddings`. | The T5Config has the parameter `n_positions` set to 512 and `max_position_embeddings` referring to `n_positions`. However, neither `max_position_embeddings` nor `n_positions` is used in the `T5Model` and T5 is not limited to `max_position_embeddings`. *E.g.*:
```python
import torch
from transformers import T5Model
model = T5Model.from_pretrained("t5-small")
model.config.max_position_embeddings # shows 512
input_ids = torch.tensor([600 * [0]]) # input of size > 512
model(input_ids, decoder_input_ids=input_ids) # works fine
```
I think we should delete the parameter.
@thomwolf - do you remember why we added `max_position_embeddigs` and `n_positions` to T5? The model does not seem to use these params and also should not be limited to 512 due to its relative position embeddings. | 10-26-2020 12:20:57 | 10-26-2020 12:20:57 | |
transformers | 8,046 | closed | Minor typo fixes to the preprocessing tutorial in the docs | Minor typo fixes to the preprocessing tutorial in the docs
# What does this PR do?
Minor typo fixes to the tokenizer summary in the docs
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
documentation: @sgugger
| 10-26-2020 10:59:53 | 10-26-2020 10:59:53 | |
transformers | 8,045 | closed | Minor typo fixes to the tokenizer summary in the docs | Minor typo fixes to the tokenizer summary in the docs
# What does this PR do?
Minor typo fixes to the tokenizer summary in the docs
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
documentation: @sgugger
| 10-26-2020 10:59:21 | 10-26-2020 10:59:21 | |
transformers | 8,044 | closed | Automatically cuts input if the pipeline position_ids can't handle it. | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Currently the failure is `index out of range in self`. (it fails in pytorch position_embeddings call).
The current PR aims to simply cut the input to the pipeline if the pipeline can't handle it with a warning to the user.
It feels better than the bad error message. As for raising an error, that seems correct for the underlying model (it can't handle
the input). But it seems a bit off to raise the error at the pipeline level, as the user has no way of knowing how to trim the input
since they do not provide tokens themselves. Automatically cutting the input with a warning seems a bit better from
a usage standpoint.
This PR also improves a docstring which had a typo.
It also adds a new `PipelineWarning` because there are now 2 instances of such warnings and there should be more in the
future so enabling users to catch those warnings seems like a good idea.
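A hedged sketch of the idea (the function and warning text here are illustrative, not the actual diff):
```python
import warnings

def ensure_fits(model_inputs, max_length):
    # Illustrative only: truncate tokenized inputs to the model's maximum
    # length and warn, instead of letting the embedding lookup crash.
    seq_len = model_inputs["input_ids"].shape[-1]
    if seq_len > max_length:
        warnings.warn(
            f"Input length was too long ({seq_len} > {max_length}) for this model and was truncated."
        )
        model_inputs = {name: tensor[..., :max_length] for name, tensor in model_inputs.items()}
    return model_inputs
```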
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
@patrickvonplaten
@lhoestq
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
--> | 10-26-2020 10:54:40 | 10-26-2020 10:54:40 | I just discovered that roberta does not respect max_embeddings_postion (it adds `padding_idx` to it) so the current fix actually does not really fix it for roberta. If t5 is another special case I fear the current approach is not adapted.
<|||||>> I just discovered that roberta does not respect max_embeddings_postion (it adds `padding_idx` to it) so the current fix actually does not really fix it for roberta. If t5 is another special case I fear the current approach is not adapted.
I think Roberta does respect `max_embeddings_positions` - why do you think it does not? Roberta is just ugly because `max_position_ids=514 != 512`, but your approach here should work for Roberta no? <|||||>You can check
```python
from transformers import pipeline
pipe = pipeline(task='sentiment-analysis', model='roberta-base-openai-detector')
pipe("Some.....very long text")
```
And it will fail. The reason is:
```python
def create_position_ids_from_input_ids(input_ids, padding_idx):
"""Replace non-padding symbols with their position numbers. Position numbers begin at
padding_idx+1. Padding symbols are ignored. This is modified from fairseq's
`utils.make_positions`.
:param torch.Tensor x:
:return torch.Tensor:
"""
# The series of casts and type-conversions here are carefully balanced to both work with ONNX export and XLA.
mask = input_ids.ne(padding_idx).int()
incremental_indices = torch.cumsum(mask, dim=1).type_as(mask) * mask
return incremental_indices.long() + padding_idx
```
As it mentions here, position_embeddings start at `padding_idx+1`. So 2, and finish at 514 (max_embeddings_position), but if we send a sequence of length `514` then the final position_embedding will be `516` (514 + 2) and so out of the embeddings available.
I'll look at T5 to look for a better approach<|||||>> ```python
> create_position_ids_from_input_ids
> ```
I see!
But this also actually looks like a bug in Roberta then. This function does not make much sense in combination with `max_position_embeddings=514`... I think Roberta's `max_position_embeddings` should actually be changed to `512`. @LysandreJik - can you take a look at this as well maybe?
Regarding T5, as written in the issue I think we should delete `max_position_embeddings`.<|||||>I think `T5` is ok, as it seems to use `n_positions` not `max_embedding_positions`, no ?
https://s3.amazonaws.com/models.huggingface.co/bert/t5-base-config.json
https://s3.amazonaws.com/models.huggingface.co/bert/t5-large-config.json<|||||>> I think `T5` is ok, as it seems to use `n_positions` not `max_embedding_positions`, no ?
>
> https://s3.amazonaws.com/models.huggingface.co/bert/t5-base-config.json
> https://s3.amazonaws.com/models.huggingface.co/bert/t5-large-config.json
The config defines `max_position_embeddings` as `n_positions` so it does have a `max_position_embeddings` param<|||||>Okay I changed the logic.
Instead of using `truncation='longest_first'`, I am following the `deprecation_warning` flag so that I can re-emit the warning; otherwise it stays silent, which is a bit harmful I'd say.
There is still an issue, where the small model used in the pipeline tests `"sshleifer/tiny-distilbert-base-uncased-finetuned-sst-2-english"` actually does not define properly a `tokenizer.model_max_length`, but I think it's more a configuration issue than a code issue.
What do you think? <|||||>It's a good idea to be able to re-emit the warnings, however, this dictionary exists so that these warnings are not re-emitted down the line. The point of this is that users only see this warning once and it doesn't flood their stdout.
This means that
```py
>>> from transformers import RobertaTokenizer
>>> tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
>>> tokenizer.encode("Hey how are you" * 1000)
Token indices sequence length is longer than the specified maximum sequence length for this model (4002 > 512). Running this sequence through the model will result in indexing errors
[0, ...]
>>> tokenizer.deprecation_warnings
{'sequence-length-is-longer-than-the-specified-maximum': True}
>>> tokenizer.encode("Hey how are you" * 10)
[0, ...]
>>> tokenizer.deprecation_warnings
{'sequence-length-is-longer-than-the-specified-maximum': True}
```
Checking against it means you will always re-emit the warning:
```py
>>> from transformers import pipeline
...
... pipe = pipeline(task='sentiment-analysis', model='roberta-base-openai-detector')
... pipe("Some.....very long text" * 1000)
[...]
Token indices sequence length is longer than the specified maximum sequence length for this model (5002 > 512). Running this sequence through the model will result in indexing errors
/home/jik/Workspaces/Python/transformers/src/transformers/pipelines.py:697: PipelineWarning: You input length was too long (5002 > 512) for this model and was truncated.
PipelineWarning,
>>> pipe("Some.....very long text")
/home/jik/Workspaces/Python/transformers/src/transformers/pipelines.py:697: PipelineWarning: You input length was too long (7 > 512) for this model and was truncated.
PipelineWarning,
[{'label': 'LABEL_0', 'score': 0.8550598621368408}]
```
Also while you're searching for better solutions, if you can use the `truncation` parameter that would be awesome. If you can't, no big deal, but I'd rather we use our own API if possible.<|||||>I agree with the idea of not spamming; the problem is that the current way of doing it is
```python
logger.warning(
"Token indices sequence length is longer than the specified maximum sequence length "
"for this model ({} > {}). Running this sequence through the model will result in "
"indexing errors".format(len(encoded_inputs["input_ids"]), self.model_max_length)
)
```
not
```python
warnings.warn(".....", UserWarning)
```
(which btw can be triggered once by setting a warning filter but it's not the purpose here).
The more I think about this issue, the more I think the current behavior might be the correct one for transformers.
- **Silent truncation is worse than raising an exception (ignoring what could be most of the input, is super dangerous for users, as it might output very wrong results).**
- Adding checks (at the model level) within the forward pass is too costly.
- `Re-emitting` a warning is bad if it's not trivial enough to do (don't want too much logic to handle those). Here we would need to capture (logger) before re-emitting (could be another logger or warnings, but it's still a bit much of work)
- At the pipeline level, we don't want to dive too deeply into model internals (like model_max_length and other configs), I feel at most config that is shared by *all* or *almost all* models (like vocab_size)
I'm in favor of closing this PR in the end.<|||||>Yes, the reason we use `logging` instead of `warnings` is to make use of the centralized logging system.
I agree with all your points, and I believe this is something that could be patched by allowing the users to pass kwargs to the tokenizer/model, as is something that should be enabled in pipelines v2.
Closing then, thank you for experimenting! |
transformers | 8,043 | closed | [Seq2Seq Trainer] Make sure padding is implemented for models without pad_token | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR adds padding for models without padding token as well. The logic is the following:
1) If model predicts `targets` < `max_length` => model has to have at least an `eos_token_id`. If model has no `config.pad_token_id` defined than the model simply uses the `config.eos_token_id` for padding.
2) If the model has no `config.eos_token_id`, => model cannot generate predictions shorter than `max_length`. In this case padding will never happen.
@sshleifer @patil-suraj - you guys were right -> the `Trainer` requires padding in any case (also if model has no padding token).
Could you guys review this PR and see if these fixes in Seq2Seq Trainer are ok for you?
| 10-26-2020 10:48:03 | 10-26-2020 10:48:03 | LGTM!<|||||>@patrickvonplaten , @sshleifer
I am seeing a major slowdown on TPU V3-8,
last time (9e68d075a4100906509170498480823e7e61874a) `sshleifer/student_marian_en_ro_6_3` finished 1 epoch in ~6 mins,
now on this branch it's showing ~1hr 20 mins<|||||>> @patrickvonplaten , @sshleifer
> I am seeing a major slowdown on TPU V3-8,
> last time ([9e68d07](https://github.com/huggingface/transformers/commit/9e68d075a4100906509170498480823e7e61874a)) `sshleifer/student_marian_en_ro_6_3` finished 1 epoch in ~6 mins,
> now on this branch it's showing ~1hr 20 mins
Ohoh :-/ can you narrow down the commit that caused the slow-down? I took a look again at https://github.com/huggingface/transformers/pull/7809/files and this line I added could be problematic `inputs = copy.deepcopy(inputs)`. <|||||>> > @patrickvonplaten , @sshleifer
> > I am seeing a major slowdown on TPU V3-8,
> > last time ([9e68d07](https://github.com/huggingface/transformers/commit/9e68d075a4100906509170498480823e7e61874a)) `sshleifer/student_marian_en_ro_6_3` finished 1 epoch in ~6 mins,
> > now on this branch it's showing ~1hr 20 mins
>
> Ohh, can you narrow down the commit that caused the slow-down? I took a look again at https://github.com/huggingface/transformers/pull/7809/files and this line I added could be problematic `inputs = copy.deepcopy(inputs)`.
Yeah this line is actually called at every step -> can you check whether removing the `copy` operation speeds the seq2seq trainer up again? I've been a bit sloppy there I think :-/ <|||||>It's still very slow even after removing that line. I'll try to find the exact commit which is causing this slowdown. |
transformers | 8,042 | closed | Tentative improvement on sequence_length error for position_ids | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Currently the failure is `index out of range in self`. (it fails in pytorch position_embeddings call).
The current PR aims to improve the error message by saying that the position_ids are simply too large.
It does add an `if` statement and a `max` call in the forward pass, which is not ideal,
but it might help other users better understand the failure mode.
If this PR is desirable (which I am not sure it is, because it has a non-zero overhead), we should probably look at all
position_ids and copy the behavior at some point.
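For illustration, a rough sketch of the kind of guard described above (names are hypothetical, not the actual diff):
```python
# Hypothetical guard before the position-embedding lookup (not the actual PR diff).
def check_position_ids(position_ids, num_position_embeddings):
    max_position = int(position_ids.max())
    if max_position >= num_position_embeddings:
        raise ValueError(
            f"position_ids go up to {max_position}, but the model only has "
            f"{num_position_embeddings} position embeddings; the input sequence "
            "is probably longer than the model supports."
        )
```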
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
@patrickvonplaten
@lhoestq
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
--> | 10-26-2020 10:16:31 | 10-26-2020 10:16:31 | I'm going to close this PR. I started it but ultimately feel it's not worth it.
The overhead is non zero for a simple error message, that won't bring that much value (most users can debug anyway, users that can't won't necessarily do a better job with the improved error message) |
transformers | 8,041 | closed | Create model cards for guwenbert | Add two model_cards: ethanyt/guwenbert-base and ethanyt/guwenbert-large | 10-26-2020 09:32:21 | 10-26-2020 09:32:21 | Cool, thanks for sharing! |
transformers | 8,040 | closed | Converting Transformers model to Tensorflow model | I am trying to convert `dbmdz/bert-base-turkish-uncased` to tensorflow model (`.pb`). It contains `tf_model.h5` file.
I tried to convert from `tf_model.h5` to a TensorFlow model. However, I couldn't manage it. The `tf_model.h5` file is a Keras model, isn't it?
Is there any instruction to convert huggingface non-tf models to tensorflow model(`.pb`)? | 10-26-2020 09:00:23 | 10-26-2020 09:00:23 | I don't believe we have a way of converting keras models from .h5 to .pb. What is your use case, so that we can see if we may help further?<|||||>I want to improve the inference performance. I fine-tuned the pretrained model (`dbmdz/bert-base-turkish-uncased`). TensorRT can be helpful to reduce inference time. Use case is converting Keras model to TensorRT model. How can I achieve this?
P.S.: I am open to any advice. Tensorflow Lite or other things can be used. I just need some guidance.<|||||>@mfuntowicz I believe you have experience with using TensorRT with the `transformers` models. Do you have any idea of how to enable this?<|||||>You can try a couple of ways :
1. Keras -> Onnx -> TensorRT : You can use keras2onnx . You might have some operations that are not directly supported in onnx so you'll have to remove/edit those operations to convertible terms. Converting onnx to TRT can be done with a tool called "trtexec" or there are readymade scripts to do that. Do check model graph and accuracy at every step.
2. Keras -> Frozen graph(.pb) -> onnx -> TensorRT
You can also use TF-TRT which again optimizes but less optimized than TensorRT.
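For reference, a rough sketch of the first path (Keras -> ONNX -> TensorRT). This is hedged: whether `keras2onnx` handles the subclassed `transformers` TF models directly needs to be verified, and exporting a SavedModel with `tf2onnx` is the usual fallback.
```python
# Hedged sketch of Keras -> ONNX for the fine-tuned model (illustrative only).
import keras2onnx
from transformers import TFAutoModel

model = TFAutoModel.from_pretrained("dbmdz/bert-base-turkish-uncased")
onnx_model = keras2onnx.convert_keras(model, model.name)
keras2onnx.save_model(onnx_model, "bert_turkish.onnx")
# Afterwards, build a TensorRT engine from the ONNX file, e.g. with:
#   trtexec --onnx=bert_turkish.onnx --saveEngine=bert_turkish.engine
```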
There is another way you can explore: Keras -> UFF -> TensorRT. <|||||>Thank you for leading @zerocool95 <|||||>I also face this problem. I found an existing ONNX file on the ONNX GitHub that said it was transferred from huggingface, but when I use trtexec to convert ONNX -> TRT, it says it doesn't support some ops like 'NoZero'. |
transformers | 8,039 | closed | Why the functions "add_special_tokens()" and "resize_token_embeddings()" hurt the performance of 'gpt2' and 'gpt2-medium' but not 'gpt2-large' and 'gpt2-xl' ? | # ❓ Questions & Help
## Details
When I use add_special_tokens and resize_token_embeddings to expand the vocabulary, the LM loss becomes very large for the gpt2 and gpt2-medium models (loaded with from_pretrained('gpt2') and from_pretrained('gpt2-medium')). But it doesn't happen when I load the gpt2-large and gpt2-xl models (also loaded with from_pretrained). Why?
Environment Info:
python 3.7.7
Linux 16.04
transformers 3.3.1
pytorch 1.6.0
Codes and results:
```python
import torch
from transformers import GPT2Tokenizer
from transformers import GPT2LMHeadModel
device = torch.device('cuda:3')
input_sentence = 'who win this game?'
gpt2tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
gpt2model = GPT2LMHeadModel.from_pretrained('gpt2', return_dict=True)
gpt2model.to(device)
input = gpt2tokenizer(input_sentence, return_tensors='pt').to(device)
outputs = gpt2model(**input, labels=input['input_ids'])
outputs.loss
tensor(5.0102, device='cuda:3', grad_fn=<NllLossBackward>)
gpt2tokenizer.add_special_tokens({'additional_special_tokens': ['[first]', '[second]']})
gpt2model.resize_token_embeddings(len(gpt2tokenizer))
input = gpt2tokenizer(input_sentence, return_tensors='pt').to(device)
outputs = gpt2model(**input, labels=input['input_ids'])
outputs.loss
tensor(77.1513, device='cuda:3', grad_fn=<NllLossBackward>)
gpt2tokenizer = GPT2Tokenizer.from_pretrained('gpt2-large')
gpt2model = GPT2LMHeadModel.from_pretrained('gpt2-large', return_dict=True)
gpt2model.to(device)
input = gpt2tokenizer(input_sentence, return_tensors='pt').to(device)
outputs = gpt2model(**input, labels=input['input_ids'])
outputs.loss
tensor(5.1567, device='cuda:3', grad_fn=<NllLossBackward>)
gpt2tokenizer.add_special_tokens({'additional_special_tokens': ['[first]', '[second]']})
gpt2model.resize_token_embeddings(len(gpt2tokenizer))
input = gpt2tokenizer(input_sentence, return_tensors='pt').to(device)
outputs = gpt2model(**input, labels=input['input_ids'])
outputs.loss
tensor(5.1568, device='cuda:3', grad_fn=<NllLossBackward>)
''' | 10-26-2020 06:36:28 | 10-26-2020 06:36:28 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>This behavior is caused by the fact that GPT2 is a model that ties the weights of the token embeddings and the language model head. For some settings of the pretrained weights, a new randomly initialized row of the language model head weight matrix might effectively unilaterally assign very high likelihood to the new word. This is what's happening for some pretrained models (but not others). A workaround is, after resizing the embeddings, set the new token embeddings to be the original unk token embedding, for example:
model.transformer.wte.weight[-1] = model.transformer.wte.weight[-2]
Alternatively, if you want to break the symmetry created by initializing with the exact same embedding, you can set the new embedding to be the average of all other embeddings, or add some noise when copying the existing unk token embedding. But just naively copying like above fixes the problem for GPT-2 (small).<|||||>@eric-mitchell You are right. Now I know how to solve my problem. Thanks for that.<|||||>Exactly same code with @eric-mitchell 's solution gives me error like:
`model.transformer.wte.weight[-1] = model.transformer.wte.weight[-2]`
> "RuntimeError: a view of a leaf Variable that requires grad is being used in an in-place operation."
As far as i know, in-place operation is the operation that changes directly the content. But here I can't find any in-place operation. Does anyone know why this happens?<|||||>@jucho2725 I've just found a possible solution [here](https://stackoverflow.com/questions/49161652/how-to-get-around-in-place-operation-error-if-index-leaf-variable-for-gradient-u) <|||||>@Loreb92 Thanks for the help. How I solve the issue is the same as there.
Here is a code that I use. Hope this helps someone.
```
tokenizer = GPT2Tokenizer.from_pretrained(model_args.model_name_or_path)
tokenizer.pad_token = tokenizer.eos_token # gpt2 does not have pad token at first.
special_tokens_dict = {
"additional_special_tokens": ['[ABC]', '[DEF]', '[GHI]'],
}
num_added_toks = tokenizer.add_special_tokens(special_tokens_dict)
model.resize_token_embeddings(len(tokenizer))
unk_tok_emb = model.transformer.wte.weight.data[tokenizer.unk_token_id, :]
for i in range(num_added_toks):
model.transformer.wte.weight.data[-(i+1), :] = unk_tok_emb
```
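(For completeness, a hedged sketch of the "average of all other embeddings" variant mentioned earlier in the thread, reusing `model` and `num_added_toks` from the snippet above:)
```python
# Hedged sketch, not from the thread's code: initialize the new rows with the
# mean of the pre-existing token embeddings instead of copying the unk row.
embeddings = model.transformer.wte.weight.data
mean_embedding = embeddings[:-num_added_toks].mean(dim=0)
for i in range(num_added_toks):
    embeddings[-(i + 1), :] = mean_embedding
```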
|
transformers | 8,038 | closed | Model Card for Gujarati-XLM-R-Base | This PR adds the model card for the Gujarati-XLM-R-Base. | 10-26-2020 00:42:39 | 10-26-2020 00:42:39 | |
transformers | 8,037 | closed | RAG: Do we need to pretrain the doc-encoder when using a custom dataset? | RAG now includes a [script](https://github.com/huggingface/transformers/blob/master/examples/rag/use_own_knowledge_dataset.py) that lets us use a custom dataset other than the wiki dataset.
Since, in the fine-tuning phase of RAG, we do not update the doc-encoder (we update only BART and the question encoder), what if our custom dataset has a different distribution compared to the wiki dataset (e.g., medical records)?
Will it still work? | 10-25-2020 23:09:07 | 10-25-2020 23:09:07 | Hey @shamanez,
From the paper: https://arxiv.org/pdf/2005.11401.pdf (check part 2.2 - 2.5) it seems like the doc-encoder was never explicitly pre-trained for RAG, but the authors used a pre-trained retriever that "was trained to retrieve documents which
contain answers to TriviaQA [20] questions and Natural Questions [25]." => So you would have to see for yourself if the document encoder is suitable for your task or not. If it is not suitable you will have to pre-train your own doc encoder and build the index using this document encoder.
It's a good question though we should probably also put it on the discussion forum: http://discuss.huggingface.co/.
Also pinging @lhoestq as he can probably provide you with a better answer than I can.
<|||||>+1 for Patrick, and I confirm that RAG uses the pretrained doc encoder from DPR.
It would be very interesting to see if the doc encoder handles well documents from specific domains.
Let us know if you plan to test that :)
If it doesn't work for your case you will probably need to train DPR on your dataset for retrieval before using RAG<|||||>@patrickvonplaten @lhoestq
Ok, I will give it a try and do a comparison for my task. Just need to clarify my pre-training pipeline for DPR in case I need to pre-train the doc encoder.
1. Pre-train the DPR using the FacebookAI [repository](https://github.com/facebookresearch/DPR).
2. Use the custom checkpoint and load it [here](https://github.com/huggingface/transformers/blob/master/examples/rag/finetune.py#L105) (Is there any conversion I need to do before this step)<|||||>@lhoestq
Sorry for spamming. Can you please let me know, whether I can directly use different DPR checkpoints that trained with Facebook repo? <|||||>> @patrickvonplaten @lhoestq
>
> Ok, I will give it a try and do a comparison for my task. Just need to clarify my pre-training pipeline for DPR in case I need to pre-train the doc encoder.
>
> 1. Pre-train the DPR using the FacebookAI [repository](https://github.com/facebookresearch/DPR).
> 2. Use the custom checkpoint and load it [here](https://github.com/huggingface/transformers/blob/master/examples/rag/finetune.py#L105) (Is there any conversion I need to do before this step)
Yes that's it.
To convert the DPR checkpoint from the original repo to transformers you can use the script `src/transformers/convert_dpr_original_checkpoint_to_pytorch.py`
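(A hedged sketch of the flow; the output paths below are hypothetical, and the conversion script's exact CLI flags should be checked against its argparse:)
```python
# After converting the Facebook DPR checkpoints with
# src/transformers/convert_dpr_original_checkpoint_to_pytorch.py
# (check the script's --help for the exact flags), load them as usual:
from transformers import DPRContextEncoder, DPRQuestionEncoder

ctx_encoder = DPRContextEncoder.from_pretrained("path/to/converted_ctx_encoder")  # hypothetical path
question_encoder = DPRQuestionEncoder.from_pretrained("path/to/converted_question_encoder")  # hypothetical path
```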
<|||||>Perfect. Thanks for the clarification :).
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 8,036 | closed | Add mixed precision evaluation | # What does this PR do?
Add flag and code to do mixed precision forward in trainer's `prediction_step` function.
This lets evaluation (and prediction) run faster.
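A minimal sketch of what this looks like with native PyTorch AMP (hedged: whether the PR uses `torch.cuda.amp` or Apex, and the exact flag name, are assumptions):
```python
import torch

def prediction_step(model, inputs, fp16_eval=True):
    # Run the eval forward pass under autocast so it benefits from mixed precision.
    with torch.no_grad():
        if fp16_eval and torch.cuda.is_available():
            with torch.cuda.amp.autocast():
                outputs = model(**inputs)
        else:
            outputs = model(**inputs)
    return outputs
```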
## Who can review?
@sgugger
| 10-25-2020 18:55:54 | 10-25-2020 18:55:54 | I see your point; Apex will take care of the other case. Updated!<|||||>Thanks! |
transformers | 8,035 | closed | ModelUtilsTest.test_model_from_pretrained failiing on CUDA | Seems as though an `architecture` key is being added.
Not sure who to assign this to @LysandreJik.
```python
__________________ ModelUtilsTest.test_model_from_pretrained ___________________
[gw0] linux -- Python 3.7.6 /home/hf/actions-runner_transformers/_work/transformers/transformers/.env/bin/python
self = <tests.test_modeling_common.ModelUtilsTest testMethod=test_model_from_pretrained>
@slow
def test_model_from_pretrained(self):
for model_name in BERT_PRETRAINED_MODEL_ARCHIVE_LIST[:1]:
config = BertConfig.from_pretrained(model_name)
self.assertIsNotNone(config)
self.assertIsInstance(config, PretrainedConfig)
model = BertModel.from_pretrained(model_name)
model, loading_info = BertModel.from_pretrained(model_name, output_loading_info=True)
self.assertIsNotNone(model)
self.assertIsInstance(model, PreTrainedModel)
for value in loading_info.values():
self.assertEqual(len(value), 0)
config = BertConfig.from_pretrained(model_name, output_attentions=True, output_hidden_states=True)
model = BertModel.from_pretrained(model_name, output_attentions=True, output_hidden_states=True)
self.assertEqual(model.config.output_hidden_states, True)
> self.assertEqual(model.config, config)
E AssertionError: BertConfig {
E "_name_or_path": "bert-base-uncased",
E "a[518 chars]22
E }
E != BertConfig {
E "architectures": [
E "BertForMaskedLM"
E [478 chars]22
E }
tests/test_modeling_common.py:1092: AssertionError
```
https://github.com/huggingface/transformers/runs/1303479917?check_suite_focus=true | 10-25-2020 17:32:06 | 10-25-2020 17:32:06 | Assign me!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 8,034 | closed | DataCollatorIntegrationTest::test_nsp failing on GPU | @sgugger I believe you are the right person to tag?
```python
=================================== FAILURES ===================================
_____________________ DataCollatorIntegrationTest.test_nsp _____________________
[gw0] linux -- Python 3.7.6 /home/hf/actions-runner_transformers/_work/transformers/transformers/.env/bin/python
self = <tests.test_data_collator.DataCollatorIntegrationTest testMethod=test_nsp>
@slow
def test_nsp(self):
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
data_collator = DataCollatorForNextSentencePrediction(tokenizer)
dataset = TextDatasetForNextSentencePrediction(tokenizer, file_path=PATH_SAMPLE_TEXT, block_size=512)
examples = [dataset[i] for i in range(len(dataset))]
batch = data_collator(examples)
self.assertIsInstance(batch, dict)
# Since there are randomly generated false samples, the total number of samples is not fixed.
total_samples = batch["input_ids"].shape[0]
self.assertEqual(batch["input_ids"].shape, torch.Size((total_samples, 512)))
self.assertEqual(batch["token_type_ids"].shape, torch.Size((total_samples, 512)))
> self.assertEqual(batch["masked_lm_labels"].shape, torch.Size((total_samples, 512)))
E KeyError: 'masked_lm_labels'
```
https://github.com/huggingface/transformers/runs/1303479917?check_suite_focus=true | 10-25-2020 17:30:37 | 10-25-2020 17:30:37 | Think that should just be renamed `"labels"`. Will look tomorrow and fix. |
transformers | 8,033 | closed | [cleanup] pegasus,marian,mbart pytorch tests | + Cleans up 3 pytorch test files
+ Faster (num_beams=2) pegasus integration test.
| 10-25-2020 17:04:15 | 10-25-2020 17:04:15 | |
transformers | 8,032 | closed | Commit 121dd43 changes DialoGPT generation behavior | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.3.1
- Platform: Linux-4.4.0-127-generic-x86_64-with-debian-stretch-sid
- Python version: 3.7.3
- PyTorch version (GPU?): 1.6.0+cu101 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes (1 TITAN-XP)
- Using distributed or parallel set-up in script?: no
### Who can help
@cccntu @patrickvonplaten @LysandreJik
## Information
Model I am using (Bert, XLNet ...): DialoGPT-large
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Checkout [121dd43](https://github.com/huggingface/transformers/commit/121dd43).
2. Run the DialoGPT "How to use" code given [here](https://huggingface.co/microsoft/DialoGPT-medium), but change `DialoGPT-medium` to `DialoGPT-large`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-large")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-large")
# Let's chat for 5 lines
for step in range(5):
# encode the new user input, add the eos_token and return a tensor in Pytorch
new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
# append the new user input tokens to the chat history
bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
# generated a response while limiting the total chat history to 1000 tokens,
chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
# pretty print last ouput tokens from bot
print("DialoGPT: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
3. For the user's first utterance, type "Hello, how are you?". I get this output:
```
>> User:Hello, how are you?
DialoGPT: 're you a fan of the show?
```
Note: this problem is still present in the current version of master (`5148f43`).
## Expected behavior
With the previous commit, `0c64b18`, I get this output:
```
>> User:Hello, how are you?
DialoGPT: I'm good, you?
```
## Possible cause
The issue seems to be related to the `<|endoftext|>` token, which is used at the end of every utterance. This is being regarded as a padding token, and thus it's attention-masked, which also seems to affect the position ids. | 10-25-2020 16:23:54 | 10-25-2020 16:23:54 | Hi @abisee , sorry for the inconvenience.
Even though you did not pass in attention mask, it is created here: (first 2 lines)
https://github.com/huggingface/transformers/blob/5148f433097915f30864bf0ca6090656fecefbb8/src/transformers/generation_utils.py#L352-L363
changing this
`chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)`
to
`chat_history_ids = model.generate(bot_input_ids, max_length=1000, )`
seems to solve the problem. (the pad_token_id will still be set to tokenizer.eos_token_id, but after attention_mask is set to all ones)
Here is how the bug can happen:
If someone tries to
* use eos_token_id in sentences
* and also sets pad_token_id=eos_token_id
* and attention mask is created like this (using positions of pad_token_id). (there is no problem when using tokenizer to create attention mask)
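Concretely, a hedged illustration of the failure mode (token ids below are placeholders):
```python
import torch

eos_token_id = 50256  # GPT-2's <|endoftext|>; DialoGPT appends it after each utterance
# Placeholder ids standing in for "Hello, how are you?" followed by the real EOS:
input_ids = torch.tensor([[101, 102, 103, 104, 105, eos_token_id]])

# What generate() builds when pad_token_id == eos_token_id and it appears in the input:
attention_mask = input_ids.ne(eos_token_id).long()
print(attention_mask)  # tensor([[1, 1, 1, 1, 1, 0]]) -> the genuine EOS gets masked out
```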
Don't have a better solution for now, will think about it.
@patrickvonplaten @LysandreJik What do you think?
Maybe generate() should not create attention mask for users, but this can break other code, too.<|||||>Thanks for the response @cccntu!
My understanding is that both GPT2 and DialoGPT were trained without a pad token; i.e. neither model has a pad token embedding. In that case, why does the DialoGPT example code contain `pad_token_id=tokenizer.eos_token_id`? What's the purpose of doing this, if a pad token isn't needed for generation, and the EOS token was never used as a pad token during training?<|||||>**For generation**, it seems that attention masks are created automatically (if there's an assigned pad token that appears in the input). See `GenerationMixin.generate()`:
```
# create attention mask if necessary
# TODO (PVP): this should later be handled by the forward fn() in each model in the future see PR 3140
if (attention_mask is None) and (pad_token_id is not None) and (pad_token_id in input_ids):
attention_mask = input_ids.ne(pad_token_id).long()
elif attention_mask is None:
attention_mask = input_ids.new_ones(input_ids.shape)
```
However **for training** (at least for GPT2 models), as far as I can tell, the attention mask is _not_ created automatically, even if there's an assigned pad token that appears in the input.
This seems like an unexpected discrepancy, and another reason to put the attention mask creation in the model's `forward` as proposed by @thomwolf in [PR 3140](https://github.com/huggingface/transformers/pull/3140).<|||||>That's a super interesting issue! Thanks for posting it here!
So in short, in order to be able to do batch_generation with GPT2 (or Beam Search), we have to use some kind of token as the `pad_token_id` in case one batch finishes early. We decided a while back that for GPT2 we will just use the `eos_token_id` as the `pad_token_id` in this case.
Just as you guys noticed the problem lies in `generate()` automatically creating the `attention_mask` and falsely assuming the `eos_token_id` is a `pad_token_id` .
IMO, it was a mistake to automatically create the `attention_mask` in `generate()` as it could lead to unexpected problems such as those!
I'm currently doing a big `generate()` refactor and in this refactor the problem should be solved (see comment in PR linked below).
I hope that I'll be able to merge the PR in ~1 week.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 8,031 | closed | [fix] FSMT slow test uses lists instead of torch tensors | `test_match_encode_decode` is failing in test_tf mode because it depends on torch. This removes the dependency.
cc @stas00 | 10-25-2020 15:53:07 | 10-25-2020 15:53:07 | |
transformers | 8,030 | closed | Getting Hosted inference API working? | Trying to get Hosted inference API to work. Was following https://gist.github.com/julien-c/857ba86a6c6a895ecd90e7f7cab48046 ... is below the correct YAML syntax?
pipeline:
-fill-mask
widget:
-text: "København er [mask] i Danmark."
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| 10-25-2020 13:03:40 | 10-25-2020 13:03:40 | @longenbach `pipeline_tag` expects a single string, not an array of string.
Note that you wouldn't need any of the tags or pipeline_tag (they would be detected automatically) if your `config.json` contained:
```json
{
...
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert"
}
```
We'll try to make that clearer in a next iteration.<|||||>Nice!
<img width="749" alt="Screenshot 2020-10-26 at 09 38 41" src="https://user-images.githubusercontent.com/326577/97150874-27a15980-1745-11eb-8017-e809af886925.png">
<|||||>@julien-c it works 🤗 Thanks for the insight on the documentation. So you are saying we can avoid making a **model card** if you include that JSON chunk in your uploaded config.json file?
In case others find confusion with the **Hosted inference API**. Below is the YAML section of my model card that works:
```html
---
language: da
tags:
- bert
- masked-lm
- lm-head
license: cc-by-4.0
datasets:
- common_crawl
- wikipedia
pipeline_tag: fill-mask
widget:
- text: "København er [MASK] i Danmark."
---
```
|
transformers | 8,029 | closed | BlenderbotSmallTokenizer throws tuple index out of range error for stopword | Using transformers==3.4.0
Script used:
```
from transformers import BlenderbotSmallTokenizer, BlenderbotForConditionalGeneration
mname = 'facebook/blenderbot-90M'
tokenizer = BlenderbotSmallTokenizer.from_pretrained(mname)
sentence = "."
tokenizer(sentence)['input_ids']
```
This throws `IndexError: tuple index out of range` | 10-25-2020 11:26:23 | 10-25-2020 11:26:23 | |
transformers | 8,028 | closed | [BUG] Unexpected overflowing_tokens in tokenizer.encode_plus | ### Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.4.0
- Platform: Linux
- Python version: 3.7
- PyTorch version (GPU?): 1.3.1
- Tensorflow version (GPU?):
- Using GPU in script?: True
- Using distributed or parallel set-up in script?:
### Who can help
tokenizers: @mfuntowicz
## Information
When I am using the BERT tokenizer, I get unexpected `overflowing_tokens`. Here is an example to reproduce:
## To reproduce
```
import torch
import transformers
from transformers import AutoTokenizer
import pdb
tokenizer = AutoTokenizer.from_pretrained('bert-base-multilingual-cased')
subtoken_ids_sentence = [x for x in range(1000,1050)]
encoded_inputs = tokenizer.encode_plus(subtoken_ids_sentence,
max_length=40,
stride=20,
return_overflowing_tokens=True,
truncation=True,
)
print(encoded_inputs['overflowing_tokens'])
```
The output is: `[1029, 1030, 1031, 1032, 1033, 1034, 1035, 1036, 1037, 1038, 1039, 1040, 1041, 1042, 1043, 1044, 1045, 1046, 1047, 1048, 1049, 1048, 1047, 1046, 1045, 1044, 1043, 1042, 1041, 1040, 1039, 1038]`
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
The expected behavior I want is:
`[1018, 1019, 1020, 1021, 1022, 1023, 1024, 1025, 1026, 1027, 1028, 1029, 1030, 1031, 1032, 1033, 1034, 1035, 1036, 1037, 1038, 1039, 1040, 1041, 1042, 1043, 1044, 1045, 1046, 1047, 1048, 1049]`
The current output contains `[1029, 1030, 1031, 1032, 1033, 1034, 1035, 1036, 1037, 1038, 1039, 1040, 1041, 1042, 1043, 1044, 1045, 1046, 1047, 1048, 1049]` and an additional reversed Tensor of `[1048, 1047, 1046, 1045, 1044, 1043, 1042, 1041, 1040, 1039, 1038]`, which I think is wrong.
When I dig into the code, I find that:
https://github.com/huggingface/transformers/blob/6b4c617666fd26646d44d54f0c45dfe1332b12ca/src/transformers/tokenization_utils_base.py#L2556-L2564
I wonder why there is a for loop in it, and I think I need `truncation_strategy = TruncationStrategy.ONLY_FIRST`. However, I failed to set the truncation_strategy to `only_first` because the code here turns the truncation strategy into `longest_first`.
https://github.com/huggingface/transformers/blob/6b4c617666fd26646d44d54f0c45dfe1332b12ca/src/transformers/tokenization_utils_base.py#L1750-L1759
Can you give me any help?
<!-- A clear and concise description of what you would expect to happen. -->
| 10-25-2020 11:13:30 | 10-25-2020 11:13:30 | I confirm the issue. It was ok with transformers 3.0.0, but from 3.1.0 it is changed.<|||||>And the code:
https://github.com/huggingface/transformers/blob/6b4c617666fd26646d44d54f0c45dfe1332b12ca/src/transformers/tokenization_utils_base.py#L2558-L2571
looks buggy, independently of the above: `ids = ids[:-1]` should be `ids = ids[:-window_len]`.<|||||>Pinging @thomwolf <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 8,027 | closed | (Load dataset failure) ConnectionError: Couldn’t reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
<!-- Description of your issue -->
Hey, I want to load the cnn-dailymail dataset for fine-tuning.
I write the code like this:
from datasets import load_dataset
test_dataset = load_dataset("cnn_dailymail", "3.0.0", split="train")
And I got the following errors.
Traceback (most recent call last):
File "test.py", line 7, in
test_dataset = load_dataset("cnn_dailymail", "3.0.0", split="test")
File "C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\load.py", line 589, in load_dataset
module_path, hash = prepare_module(
File "C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\load.py", line 268, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\utils\file_utils.py", line 300, in cached_path
output_path = get_from_cache(
File "C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\utils\file_utils.py", line 475, in get_from_cache
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py
How can I fix this ?
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on the forum/Stack Overflow**: | 10-25-2020 10:59:33 | 10-25-2020 10:59:33 | I got the same problem as you. I installed transformers from source but still have this problem. @sgugger <|||||>I had the same problem, but this is how I solved it:
1. Access the address and download the file:
https://raw.githubusercontent.com/huggingface/datasets/1.6.0/datasets/bookcorpus/bookcorpus.py
2. Put the file in this position:

3. Then you can load the corpus normally. I hope it can help you.
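If the download keeps failing, here is a minimal sketch of pointing `load_dataset` at the locally downloaded script instead (the local filename below is an assumption, not from the original report):
```python
from datasets import load_dataset

# Assumes the dataset script was downloaded manually, e.g. from
# https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py
# and saved next to your own script as ./cnn_dailymail.py
test_dataset = load_dataset("./cnn_dailymail.py", "3.0.0", split="train")
```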
|
transformers | 8,026 | closed | [Model Card] new cross lingual sentence model for German and English | - add new model card
- adapted model cards of our other sentence embeddings | 10-25-2020 07:37:39 | 10-25-2020 07:37:39 | @julien-c The small subsequent adjustments are done.
I would be happy if it could be merged.
Thanks a lot
Philip |
transformers | 8,025 | closed | Create README.md | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| 10-24-2020 23:44:05 | 10-24-2020 23:44:05 | we should probably validate model ids (to something like `[\w\d-_]{3,}`) @Pierrci, mind creating an issue for this? |
transformers | 8,024 | closed | T5 on multiple tasks | Dear Huggingface team,
I am looking for code to pretrain T5 on multiple tasks; the best I could find is the code released with a wrapper around the huggingface T5 model in the original authors' repo:
https://github.com/google-research/text-to-text-transfer-transformer/blob/master/t5/models/hf_model.py
On lines 308-310, they create TensorFlow datasets, I think, and pass them to the huggingface model. I was wondering if I could ask for your help: if one wants to add data parallelism to this code to make it efficient in PyTorch, how can I do it? Thanks a lot, I appreciate your help.
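To make the question concrete, here is a minimal sketch of what I have in mind on the PyTorch side with `torch.nn.DataParallel`; the batch layout and model size are illustrative assumptions, not taken from the t5 repo:
```python
import torch
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("t5-base")
model = torch.nn.DataParallel(model).cuda()  # replicate the model over all visible GPUs

def training_step(batch, optimizer):
    # `batch` is assumed to be a dict of tensors from your own DataLoader,
    # e.g. {"input_ids": ..., "attention_mask": ..., "labels": ...}
    outputs = model(
        input_ids=batch["input_ids"].cuda(),
        attention_mask=batch["attention_mask"].cuda(),
        labels=batch["labels"].cuda(),
    )
    loss = outputs[0].mean()  # DataParallel gathers one loss per GPU; average them
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```
For larger-scale training, `torch.nn.parallel.DistributedDataParallel` together with a `DistributedSampler` would be the usual choice instead.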
| 10-24-2020 22:22:31 | 10-24-2020 22:22:31 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 8,023 | closed | Tiny TF Bart fixes | Tiny TF Bart fixes | 10-24-2020 16:56:29 | 10-24-2020 16:56:29 | |
transformers | 8,022 | closed | [test] tests/test_modeling_deberta.py breaks on pytorch-nightly | pytorch-nightly and pytorch-1.7-candidate break these:
```
FAILED tests/test_modeling_deberta.py::DebertaModelTest::test_deberta_model - RuntimeError: diff_view_meta->output_nr_ == 0 INTERNAL ASSERT FAILED...
FAILED tests/test_modeling_deberta.py::DebertaModelTest::test_feed_forward_chunking - RuntimeError: diff_view_meta->output_nr_ == 0 INTERNAL ASSER...
FAILED tests/test_modeling_deberta.py::DebertaModelTest::test_for_sequence_classification - RuntimeError: diff_view_meta->output_nr_ == 0 INTERNAL...
FAILED tests/test_modeling_deberta.py::DebertaModelTest::test_resize_tokens_embeddings - RuntimeError: diff_view_meta->output_nr_ == 0 INTERNAL AS...
```
looks like a bug in pytorch, right?
```
RuntimeError: diff_view_meta->output_nr_ == 0 INTERNAL ASSERT FAILED at "/opt/conda/conda- \
bld/pytorch_1603436966316/work/torch/csrc/autograd/variable.cpp":363, please report a bug to PyTorch.
```
log:
```
================================================================ test session starts ================================================================
platform linux -- Python 3.8.5, pytest-6.1.1, py-1.9.0, pluggy-0.13.1 -- /home/stas/anaconda3/envs/main-38/bin/python
cachedir: .pytest_cache
rootdir: /mnt/nvme1/code/huggingface/transformers-master
plugins: typeguard-2.10.0, forked-1.3.0, xdist-2.1.0, instafail-0.4.2
collecting ... 2020-10-24 09:21:07.169605: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
collected 1 item
tests/test_modeling_deberta.py::DebertaModelTest::test_deberta_model FAILED
________________________________________________________ DebertaModelTest.test_deberta_model ________________________________________________________
self = <tests.test_modeling_deberta.DebertaModelTest testMethod=test_deberta_model>
def test_deberta_model(self):
config_and_inputs = self.model_tester.prepare_config_and_inputs()
> self.model_tester.create_and_check_deberta_model(*config_and_inputs)
tests/test_modeling_deberta.py:210:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/test_modeling_deberta.py:159: in create_and_check_deberta_model
sequence_output = model(input_ids, attention_mask=input_mask, token_type_ids=token_type_ids)[0]
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py:744: in _call_impl
result = self.forward(*input, **kwargs)
src/transformers/modeling_deberta.py:891: in forward
encoder_outputs = self.encoder(
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py:744: in _call_impl
result = self.forward(*input, **kwargs)
src/transformers/modeling_deberta.py:401: in forward
hidden_states = layer_module(
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py:744: in _call_impl
result = self.forward(*input, **kwargs)
src/transformers/modeling_deberta.py:324: in forward
attention_output = self.attention(
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py:744: in _call_impl
result = self.forward(*input, **kwargs)
src/transformers/modeling_deberta.py:257: in forward
self_output = self.self(
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py:744: in _call_impl
result = self.forward(*input, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = DisentangledSelfAttention(
(in_proj): Linear(in_features=32, out_features=96, bias=False)
(dropout): StableDropout()
)
hidden_states = tensor([[[-1.5951, -1.0046, 0.5641, ..., -0.4472, 0.0159, -0.1435],
[-1.3476, -0.1559, -1.3866, ..., 1.2... [ 0.5269, -0.0601, -0.4018, ..., -0.1616, 0.5335, -0.8894]]],
device='cuda:0', grad_fn=<MulBackward0>)
attention_mask = tensor([[[[1, 1, 1, 1, 0, 0, 0],
[1, 1, 1, 1, 0, 0, 0],
[1, 1, 1, 1, 0, 0, 0],
[1, 1, 1,..., 1, 1, 1, 1],
[0, 0, 0, 1, 1, 1, 1],
[0, 0, 0, 1, 1, 1, 1]]]], device='cuda:0', dtype=torch.uint8)
return_att = False, query_states = None, relative_pos = None, rel_embeddings = None
def forward(
self,
hidden_states,
attention_mask,
return_att=False,
query_states=None,
relative_pos=None,
rel_embeddings=None,
):
"""Call the module
Args:
hidden_states (:obj:`torch.FloatTensor`):
Input states to the module usally the output from previous layer, it will be the Q,K and V in `Attention(Q,K,V)`
attention_mask (:obj:`torch.ByteTensor`):
An attention mask matrix of shape [`B`, `N`, `N`] where `B` is the batch size, `N` is the maxium sequence length in which element [i,j] = `1` means the `i` th token in the input can attend to the `j` th token.
return_att (:obj:`bool`, optional):
Whether return the attention maxitrix.
query_states (:obj:`torch.FloatTensor`, optional):
The `Q` state in `Attention(Q,K,V)`.
relative_pos (:obj:`torch.LongTensor`):
The relative position encoding between the tokens in the sequence. It's of shape [`B`, `N`, `N`] with values ranging in [`-max_relative_positions`, `max_relative_positions`].
rel_embeddings (:obj:`torch.FloatTensor`):
The embedding of relative distances. It's a tensor of shape [:math:`2 \\times \\text{max_relative_positions}`, `hidden_size`].
"""
if query_states is None:
qp = self.in_proj(hidden_states) # .split(self.all_head_size, dim=-1)
query_layer, key_layer, value_layer = self.transpose_for_scores(qp).chunk(3, dim=-1)
else:
def linear(w, b, x):
if b is not None:
return torch.matmul(x, w.t()) + b.t()
else:
return torch.matmul(x, w.t()) # + b.t()
ws = self.in_proj.weight.chunk(self.num_attention_heads * 3, dim=0)
qkvw = [torch.cat([ws[i * 3 + k] for i in range(self.num_attention_heads)], dim=0) for k in range(3)]
qkvb = [None] * 3
q = linear(qkvw[0], qkvb[0], query_states)
k, v = [linear(qkvw[i], qkvb[i], hidden_states) for i in range(1, 3)]
query_layer, key_layer, value_layer = [self.transpose_for_scores(x) for x in [q, k, v]]
query_layer += self.transpose_for_scores(self.q_bias[None, None, :])
> value_layer += self.transpose_for_scores(self.v_bias[None, None, :])
E RuntimeError: diff_view_meta->output_nr_ == 0 INTERNAL ASSERT FAILED at "/opt/conda/conda-bld/pytorch_1603436966316/work/torch/csrc/autograd/variable.cpp":363, please report a bug to PyTorch.
src/transformers/modeling_deberta.py:575: RuntimeError
============================================================== short test summary info ==============================================================
FAILED tests/test_modeling_deberta.py::DebertaModelTest::test_deberta_model - RuntimeError: diff_view_meta->output_nr_ == 0 INTERNAL ASSERT FAILED...
=========================================================== 1 failed, 5 warnings in 4.22s ===========================================================
```
## Environment info
```
- `transformers` version: 3.4.0
- Platform: Linux-5.4.0-52-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.8.0.dev20201023 (True)
- Tensorflow version (GPU?): 2.3.1 (True)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
for pytorch-1.7 candidate I used:
https://download.pytorch.org/whl/test/cu102/torch-1.7.0-cp38-cp38-linux_x86_64.whl
----------------------
**update**: The release team https://github.com/pytorch/pytorch/issues/45592 has been notified of this issue on pytorch slack. Waiting to hear back from them. | 10-24-2020 16:21:26 | 10-24-2020 16:21:26 | I couldn't reproduce this against 1.6. When I run it against 1.7, I get:
/data/users/gchanan/transformers/src/transformers/modeling_deberta.py:574: UserWarning: Output 0 of SplitBackward is a view and is being modified inplace. This view is an output of a function that returns multiple views. Inplace operators on such views are being deprecated and will be forbidden starting from version 1.8. Consider using `unsafe_` version of the function that produced this view or don't modify this view inplace. (Triggered internally at /opt/conda/conda-bld/pytorch_1601363278767/work/torch/csrc/autograd/variable.cpp:480.)<|||||>Here's a workaround: change
https://github.com/huggingface/transformers/blob/5148f433097915f30864bf0ca6090656fecefbb8/src/transformers/modeling_deberta.py#L574
to:
`query_layer = query_layer + self.transpose_for_scores(self.q_bias[None, None, :])`
i.e. make the modification out-of-place. It might be better to do what is in the warning and change the `split` to `unsafe_split`, but I haven't tested that.<|||||>Thank you very much, @gchanan! That solved the problem.<|||||>Also pasting a more details explanation from slack by @albanD:
> The warning that is raised just before tells you what the issue is and tells you that this won't be allowed soon as it can crash (as you have seen). Removing the inplace is the right fix here.
The reason here is that split and chunk were fixed to properly return view (and avoid silently wrong gradients). But Inplace on the result is not handled by the autograd and will soon raise an error.
We have left unsafe_split and unsafe_chunk (both in python and c++) if you need the old behavior while you fix your code to avoid the inplace. |
transformers | 8,021 | closed | [bart] SinusoidalPositionalEmbedding breaks under pytorch-nightly | pytorch-nightly breaks these:
```
FAILED tests/test_modeling_bart.py::TestSinusoidalPositionalEmbeddings::test_odd_embed_dim - RuntimeError: a view of a leaf Variable that requires...
FAILED tests/test_modeling_bart.py::TestSinusoidalPositionalEmbeddings::test_positional_emb_cache_logic - RuntimeError: a view of a leaf Variable ...
FAILED tests/test_modeling_bart.py::TestSinusoidalPositionalEmbeddings::test_positional_emb_weights_against_marian - RuntimeError: a view of a lea...
F
```
```
================================================================ test session starts ================================================================
platform linux -- Python 3.8.5, pytest-6.1.1, py-1.9.0, pluggy-0.13.1 -- /home/stas/anaconda3/envs/main-38/bin/python
cachedir: .pytest_cache
rootdir: /mnt/nvme1/code/huggingface/transformers-master
plugins: typeguard-2.10.0, forked-1.3.0, xdist-2.1.0, instafail-0.4.2
collecting ... 2020-10-24 09:14:35.276431: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
collected 1 item
tests/test_modeling_bart.py::TestSinusoidalPositionalEmbeddings::test_odd_embed_dim FAILED
_______________________________________________ TestSinusoidalPositionalEmbeddings.test_odd_embed_dim _______________________________________________
self = <tests.test_modeling_bart.TestSinusoidalPositionalEmbeddings testMethod=test_odd_embed_dim>
def test_odd_embed_dim(self):
with self.assertRaises(NotImplementedError):
SinusoidalPositionalEmbedding(num_positions=4, embedding_dim=5, padding_idx=0).to(torch_device)
# odd num_positions is allowed
> SinusoidalPositionalEmbedding(num_positions=5, embedding_dim=4, padding_idx=0).to(torch_device)
tests/test_modeling_bart.py:627:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/transformers/modeling_bart.py:1331: in __init__
self.weight = self._init_weight(self.weight)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
out = Parameter containing:
tensor([[ 0.1368, 2.6925, 1.3918, 0.7332],
[-1.2813, -0.3071, 1.0553, -0.4325],
...6445],
[-1.6619, -0.2872, 0.6869, 0.6489],
[-1.5226, 0.1161, -0.2026, 0.1853]], requires_grad=True)
@staticmethod
def _init_weight(out: nn.Parameter):
"""Identical to the XLM create_sinusoidal_embeddings except features are not interleaved.
The cos features are in the 2nd half of the vector. [dim // 2:]
"""
n_pos, dim = out.shape
position_enc = np.array(
[[pos / np.power(10000, 2 * (j // 2) / dim) for j in range(dim)] for pos in range(n_pos)]
)
> out[:, 0 : dim // 2] = torch.FloatTensor(np.sin(position_enc[:, 0::2])) # This line breaks for odd n_pos
E RuntimeError: a view of a leaf Variable that requires grad is being used in an in-place operation.
src/transformers/modeling_bart.py:1342: RuntimeError
============================================================== short test summary info ==============================================================
FAILED tests/test_modeling_bart.py::TestSinusoidalPositionalEmbeddings::test_odd_embed_dim - RuntimeError: a view of a leaf Variable that requires...
=========================================================== 1 failed, 3 warnings in 3.01s ===========================================================
```
## Environment info
```
- `transformers` version: 3.4.0
- Platform: Linux-5.4.0-52-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.8.0.dev20201023 (True)
- Tensorflow version (GPU?): 2.3.1 (True)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
``` | 10-24-2020 16:17:33 | 10-24-2020 16:17:33 | |
transformers | 8,020 | closed | sentencepiece 0.1.94 causing segmentation fault | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.4.0 and 3.3.1
- Platform: Linux/Sagemaker
- Python version: 3.7
- PyTorch version (GPU?): 1.4
- Tensorflow version (GPU?):
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
## Information
## To reproduce
Steps to reproduce the behavior:
1. `pip install transformers[torch]`
2. `from transformers.trainer import TrainingArguments, Trainer`
`import torch`
3. `torch.tensor([1,2,3])`
`transformers` 3.3.1 seg faults at step 3, `transformers` 3.4 seg faults at step 2.
## Expected behavior
No segmentation fault | 10-24-2020 15:37:29 | 10-24-2020 15:37:29 | This can be worked around, for anyone hitting this issue, by setting `sentencepiece==0.1.91` explicitly.<|||||>Maybe we could set `sentencepiece==0.1.91` in the setup.py to prevent this from happening, as we already had the issue with the 0.1.92.
Do you want to open a PR for that?<|||||>This should also be fixed by #8073 <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 8,019 | closed | Colab can't import trim_batch for T5, anything changed in transformers.tokenization_utils? | from transformers.tokenization_utils import trim_batch
ImportError: cannot import name 'trim_batch'
Any solutions? Thanks a lot. | 10-24-2020 15:35:31 | 10-24-2020 15:35:31 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Had the same issue. Found a trim_batch implementation on the transformers repo, used it, didn't face any issues so far.
[Link to implementation](https://github.com/huggingface/transformers/blob/783d7d2629e97c5f0c5f9ef01b8c66410275c204/examples/research_projects/rag/utils_rag.py#L35)
Code for reference:
```python
def trim_batch(
input_ids,
pad_token_id,
attention_mask=None,
):
"""Remove columns that are populated exclusively by pad_token_id"""
keep_column_mask = input_ids.ne(pad_token_id).any(dim=0)
if attention_mask is None:
return input_ids[:, keep_column_mask]
else:
return (input_ids[:, keep_column_mask], attention_mask[:, keep_column_mask])
``` |
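A small usage sketch of the function above (the tensors are made up for illustration):
```python
import torch

pad_token_id = 0
input_ids = torch.tensor([[5, 7, 9, 0, 0],
                          [5, 7, 0, 0, 0]])
attention_mask = input_ids.ne(pad_token_id).long()

trimmed_ids, trimmed_mask = trim_batch(input_ids, pad_token_id, attention_mask=attention_mask)
# trimmed_ids.shape == (2, 3): the two columns that were all padding are dropped
```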
transformers | 8,018 | closed | tutorial document | In the Translation section in https://huggingface.co/transformers/task_summary.html
"
Here is an example of doing translation using a model and a tokenizer. The process is the following:
1.Instantiate a tokenizer and a model from the checkpoint name. Summarization is usually done using an encoder-decoder model, such as Bart or T5.
2.Define the article that should be summarizaed.
3.Add the T5 specific prefix “translate English to German: “
4.Use the PreTrainedModel.generate() method to perform the translation.
"
Points 1 and 2 seem to be copied from the Summarization section and were not modified accordingly.
| 10-24-2020 14:16:10 | 10-24-2020 14:16:10 | You're correct! @patrickvonplaten, git blame shows you as the author, want to fix?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 8,017 | closed | Create README.md | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| 10-24-2020 12:27:58 | 10-24-2020 12:27:58 | |
transformers | 8,016 | closed | Mlflow integration callback | # What does this PR do?
This PR adds Trainer integration with [MLflow](https://mlflow.org/).
It is implemented in roughly the same way as other integration callbacks (CometML, wandb) and gets added to the list of Trainer callbacks automatically when mlflow is installed. All the mlflow parameters are configured with env variables, as described in the library documentation. This PR adds an additional environment variable, `HF_MLFLOW_LOG_ARTIFACTS`, which controls whether to use the mlflow artifact logging facility to save artifacts generated after training (it doesn't make much sense if mlflow is used locally).
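A hedged sketch of how this would typically be configured before building the `Trainer` (the tracking URI and experiment name below are placeholders; only `HF_MLFLOW_LOG_ARTIFACTS` is specific to this PR):
```python
import os

# standard MLflow configuration, picked up by the mlflow client
os.environ["MLFLOW_TRACKING_URI"] = "http://localhost:5000"   # placeholder URI
os.environ["MLFLOW_EXPERIMENT_NAME"] = "my-experiment"        # placeholder name

# new in this PR: also upload artifacts produced after training to the MLflow server
os.environ["HF_MLFLOW_LOG_ARTIFACTS"] = "TRUE"
```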
Fixes #7698
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
| 10-24-2020 12:18:20 | 10-24-2020 12:18:20 | |
transformers | 8,015 | closed | Create README.md | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| 10-24-2020 12:17:07 | 10-24-2020 12:17:07 | |
transformers | 8,014 | closed | weird output shape when fine-tuning TFDistilBertForSequenceClassification | I'm trying to fine-tune `TFDistilBertForSequenceClassification` for multi-class classification (100 classes) on a custom dataset following the tutorial at https://huggingface.co/transformers/custom_datasets.html.
I'm following the workflow for fine-tuning in native tensorflow, i.e.:
```
from transformers import TFDistilBertForSequenceClassification
model = TFDistilBertForSequenceClassification.from_pretrained('distilbert-base-uncased')
optimizer = tf.keras.optimizers.Adam(learning_rate=5e-5)
model.compile(optimizer=optimizer, loss=model.compute_loss) # can also use any keras loss fn
model.fit(train_dataset.shuffle(1000).batch(16), epochs=3, batch_size=16)
```
Everything seems to go fine during fine-tuning, but when I try to predict on the test dataset (2000 samples) using `model.predict(test_dataset)`, I get an output with weird shape.
That is, instead of getting an output of shape (1, 2000, 100), I get one with shape (1, 1024000, 100), where 1024000 happens to be number of test examples (2000) times the sequence length (512).
Any hint on what's going on here? Sorry if it's a naïve mistake on my side, I'm new to tf. | 10-24-2020 12:13:16 | 10-24-2020 12:13:16 | One probably related issue is that if I try to use `model.evaluate` I get an error with the following traceback:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-125-3f03cbe29a62> in <module>
----> 1 model.evaluate(test_dataset)
~/opt/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in _method_wrapper(self, *args, **kwargs)
106 def _method_wrapper(self, *args, **kwargs):
107 if not self._in_multi_worker_mode(): # pylint: disable=protected-access
--> 108 return method(self, *args, **kwargs)
109
110 # Running inside `run_distribute_coordinator` already.
~/opt/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in evaluate(self, x, y, batch_size, verbose, sample_weight, steps, callbacks, max_queue_size, workers, use_multiprocessing, return_dict)
1377 with trace.Trace('TraceContext', graph_type='test', step_num=step):
1378 callbacks.on_test_batch_begin(step)
-> 1379 tmp_logs = test_function(iterator)
1380 if data_handler.should_sync:
1381 context.async_wait()
~/opt/anaconda3/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds)
778 else:
779 compiler = "nonXla"
--> 780 result = self._call(*args, **kwds)
781
782 new_tracing_count = self._get_tracing_count()
~/opt/anaconda3/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds)
821 # This is the first call of __call__, so we have to initialize.
822 initializers = []
--> 823 self._initialize(args, kwds, add_initializers_to=initializers)
824 finally:
825 # At this point we know that the initialization is complete (or less
~/opt/anaconda3/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in _initialize(self, args, kwds, add_initializers_to)
695 self._concrete_stateful_fn = (
696 self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access
--> 697 *args, **kwds))
698
699 def invalid_creator_scope(*unused_args, **unused_kwds):
~/opt/anaconda3/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs)
2853 args, kwargs = None, None
2854 with self._lock:
-> 2855 graph_function, _, _ = self._maybe_define_function(args, kwargs)
2856 return graph_function
2857
~/opt/anaconda3/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs)
3211
3212 self._function_cache.missed.add(call_context_key)
-> 3213 graph_function = self._create_graph_function(args, kwargs)
3214 self._function_cache.primary[cache_key] = graph_function
3215 return graph_function, args, kwargs
~/opt/anaconda3/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
3073 arg_names=arg_names,
3074 override_flat_arg_shapes=override_flat_arg_shapes,
-> 3075 capture_by_value=self._capture_by_value),
3076 self._function_attributes,
3077 function_spec=self.function_spec,
~/opt/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)
984 _, original_func = tf_decorator.unwrap(python_func)
985
--> 986 func_outputs = python_func(*func_args, **func_kwargs)
987
988 # invariant: `func_outputs` contains only Tensors, CompositeTensors,
~/opt/anaconda3/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in wrapped_fn(*args, **kwds)
598 # __wrapped__ allows AutoGraph to swap in a converted function. We give
599 # the function a weak reference to itself to avoid a reference cycle.
--> 600 return weak_wrapped_fn().__wrapped__(*args, **kwds)
601 weak_wrapped_fn = weakref.ref(wrapped_fn)
602
~/opt/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
971 except Exception as e: # pylint:disable=broad-except
972 if hasattr(e, "ag_error_metadata"):
--> 973 raise e.ag_error_metadata.to_exception(e)
974 else:
975 raise
ValueError: in user code:
/Users/rr48396/opt/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py:1224 test_function *
return step_function(self, iterator)
/Users/rr48396/opt/anaconda3/lib/python3.7/site-packages/transformers/modeling_tf_utils.py:142 compute_loss *
return loss_fn(labels, logits)
/Users/rr48396/opt/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/losses.py:149 __call__ **
losses = ag_call(y_true, y_pred)
/Users/rr48396/opt/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/losses.py:253 call **
return ag_fn(y_true, y_pred, **self._fn_kwargs)
/Users/rr48396/opt/anaconda3/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py:201 wrapper
return target(*args, **kwargs)
/Users/rr48396/opt/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/losses.py:1567 sparse_categorical_crossentropy
y_true, y_pred, from_logits=from_logits, axis=axis)
/Users/rr48396/opt/anaconda3/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py:201 wrapper
return target(*args, **kwargs)
/Users/rr48396/opt/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/backend.py:4783 sparse_categorical_crossentropy
labels=target, logits=output)
/Users/rr48396/opt/anaconda3/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py:201 wrapper
return target(*args, **kwargs)
/Users/rr48396/opt/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/nn_ops.py:4175 sparse_softmax_cross_entropy_with_logits_v2
labels=labels, logits=logits, name=name)
/Users/rr48396/opt/anaconda3/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py:201 wrapper
return target(*args, **kwargs)
/Users/rr48396/opt/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/nn_ops.py:4090 sparse_softmax_cross_entropy_with_logits
logits.get_shape()))
ValueError: Shape mismatch: The shape of labels (received (1,)) should equal the shape of logits except for the last dimension (received (512, 100)).
```
Shouldn't be an error related to my dataset, as it's constructed the same way as in the tutorial...<|||||>never mind, I didn't realize the model requires manually batching the dataset at prediction. closing :) |
transformers | 8,013 | closed | [doc prepare_seq2seq_batch] fix docs | The `prepare_seq2seq_batch` method returns `[input_ids, attention_mask, labels]` and not `[input_ids, attention_mask, decoder_input_ids]`. This PR fixes the docs accordingly.
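A quick illustrative call showing the keys actually returned (the checkpoint is just an example):
```python
from transformers import BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
batch = tokenizer.prepare_seq2seq_batch(
    src_texts=["A long article to summarize."],
    tgt_texts=["A short summary."],
    return_tensors="pt",
)
print(batch.keys())  # dict_keys(['input_ids', 'attention_mask', 'labels'])
```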
@sshleifer | 10-24-2020 07:39:20 | 10-24-2020 07:39:20 | |
transformers | 8,012 | closed | Add model_cards for DynaBERT | Add model_cards for DynaBERT_MNLI and DynaBERT_SST-2. | 10-24-2020 07:24:07 | 10-24-2020 07:24:07 | |
transformers | 8,011 | closed | AttributeError: module 'tensorflow.python.keras.utils.generic_utils' has no attribute 'populate_dict_with_module_objects' | Problem and solution:
This happened several times recently. Some env gets messed up and I end up with most tests failing with:
```____________________________________________________ ERROR collecting tests/test_benchmark_tf.py ____________________________________________________
tests/test_benchmark_tf.py:6: in <module>
from transformers import AutoConfig, is_tf_available
src/transformers/__init__.py:22: in <module>
from .integrations import ( # isort:skip
src/transformers/integrations.py:58: in <module>
from .file_utils import is_torch_tpu_available
src/transformers/file_utils.py:59: in <module>
import tensorflow as tf
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/tensorflow/__init__.py:41: in <module>
from tensorflow.python.tools import module_util as _module_util
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/tensorflow/python/__init__.py:84: in <module>
from tensorflow.python import keras
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/tensorflow/python/keras/__init__.py:27: in <module>
from tensorflow.python.keras import models
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/tensorflow/python/keras/models.py:24: in <module>
from tensorflow.python.keras import metrics as metrics_module
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/tensorflow/python/keras/metrics.py:37: in <module>
from tensorflow.python.keras.engine import base_layer
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py:51: in <module>
from tensorflow.python.keras import initializers
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/tensorflow/python/keras/initializers/__init__.py:127: in <module>
populate_deserializable_objects()
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/tensorflow/python/keras/initializers/__init__.py:85: in populate_deserializable_objects
generic_utils.populate_dict_with_module_objects(
E AttributeError: module 'tensorflow.python.keras.utils.generic_utils' has no attribute 'populate_dict_with_module_objects'
collected 0 items / 1 error
```
I think it's conda installing a broken tensorflow, I'm not 100% sure - see the solution in the next comment.
| 10-24-2020 06:53:12 | 10-24-2020 06:53:12 | ### Solution
If you encounter the same, here is how to fix it:
```
pip uninstall -y tensorflow-gpu tensorflow
pip install tensorflow-gpu -U
```
(I assume you want the gpu version - adjust if not)
<|||||>I think that for the last few versions, when installing `tensorflow` you get a `tensorflow` that can use your GPU out of the box, so there's no need to play with `tensorflow-gpu`/`tensorflow`!<|||||>I think it's another package's dependency pulling in `tensorflow-gpu` - e.g. I see: `wandb/requirements.txt:tensorflow-gpu==2.3.1`<|||||>In my case, after `conda install tensorflow-gpu` - to install `tensorflow` version 2.2,
I then tried `pip install autokeras` (because conda does not have this package).
`pip` would install **_another_** `tensorflow` (in this case, version 2.3).
And this is when the problem happened.
```
>>> import tensorflow as tf
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/longnv/.conda/envs/testENV/lib/python3.6/site-packages/tensorflow/__init__.py", line 41, in <module>
from tensorflow.python.tools import module_util as _module_util
File "/home/longnv/.conda/envs/testENV/lib/python3.6/site-packages/tensorflow/python/__init__.py", line 84, in <module>
from tensorflow.python import keras
File "/home/longnv/.conda/envs/testENV/lib/python3.6/site-packages/tensorflow/python/keras/__init__.py", line 27, in <module>
from tensorflow.python.keras import models
File "/home/longnv/.conda/envs/testENV/lib/python3.6/site-packages/tensorflow/python/keras/models.py", line 24, in <module>
from tensorflow.python.keras import metrics as metrics_module
File "/home/longnv/.conda/envs/testENV/lib/python3.6/site-packages/tensorflow/python/keras/metrics.py", line 37, in <module>
from tensorflow.python.keras.engine import base_layer
File "/home/longnv/.conda/envs/testENV/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py", line 51, in <module>
from tensorflow.python.keras import initializers
File "/home/longnv/.conda/envs/testENV/lib/python3.6/site-packages/tensorflow/python/keras/initializers/__init__.py", line 127, in <module>
populate_deserializable_objects()
File "/home/longnv/.conda/envs/testENV/lib/python3.6/site-packages/tensorflow/python/keras/initializers/__init__.py", line 85, in populate_deserializable_objects
generic_utils.populate_dict_with_module_objects(
AttributeError: module 'tensorflow.python.keras.utils.generic_utils' has no attribute 'populate_dict_with_module_objects'
>>> conda uninstall keras
```
It seems like we have similar error.
I am not sure what you have done with your env but your solution worked in my case too, obviously.<|||||>Please add the snippet of the code into the program of general_utils.py
**1. Find out your path**
For me, the path is listed as follows:
/home/user/miniconda3/lib/python3.7/site-packages/tensorflow/python/keras/utils/generic_utils.py
**2. Paste the code into the program of generic_utils.py**
```
def populate_dict_with_module_objects(target_dict, modules, obj_filter):
for module in modules:
for name in dir(module):
obj = getattr(module, name)
if obj_filter(obj):
target_dict[name] = obj
```
**3. Initialize the dev tool**
It will be working after initializing the dev tool (it is Terminal for me in Linux) |
transformers | 8,010 | closed | src->transformers->generation_tf_util.py ->_generate_beam_search->outputs = self(**model_inputs) why self ?There is not a function? | Traceback (most recent call last):
File "D:\Program Files\JetBrains\PyCharm 2018.3.2\helpers\pydev\pydevd.py", line 1741, in <module>
main()
File "D:\Program Files\JetBrains\PyCharm 2018.3.2\helpers\pydev\pydevd.py", line 1735, in main
globals = debugger.run(setup['file'], None, None, is_module)
File "D:\Program Files\JetBrains\PyCharm 2018.3.2\helpers\pydev\pydevd.py", line 1135, in run
pydev_imports.execfile(file, globals, locals) # execute the script
File "D:\Program Files\JetBrains\PyCharm 2018.3.2\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "D:/Github_project/transformers-master/src/transformers/generation_tf_utils.py", line 1128, in <module>
use_cache = None,
File "D:/Github_project/transformers-master/src/transformers/generation_tf_utils.py", line 625, in _generate_beam_search
outputs = self(**model_inputs) # (batch_size * num_beams, cur_len, vocab_size)
TypeError: 'TFGenerationMixin' object is not callable | 10-24-2020 02:30:18 | 10-24-2020 02:30:18 | is self.generate?<|||||>Hello! Could you provide all the information relative to your environment as asked in the template, as well as the code that generates the error? Thanks!<|||||>Thanks for your replying!
outputs = self(**model_inputs)
This code appears in the functions _generate_beam_search and _generate_no_beam_search.
The 'self' here is an instance of the class, which is not callable, so how can it be used like this? Shouldn't there be a function name, like self.generate?
So I just wanted to test generation_tf_utils.py; I added the test code below and deleted unrelated code, just for this test, right before 'while cur_len < max_length: model_inputs = self.prepare_inputs_for_generation(
input_ids, past=past, attention_mask=attention_mask, use_cache=use_cache
)':
My environment is:
windows
tensorflow 2.0.0rc0
numpy 1.17.2
```
# my add
# if __name__ == '__main__':
# a = TFGenerationMixin()
# a._generate_beam_search(input_ids = None,
# cur_len = 10,
# max_length = 100,
# min_length = 5,
# do_sample = None,
# early_stopping = None,
# # num_beams = None,
# temperature = None,
# top_k = None,
# top_p = None,
# repetition_penalty = None,
# no_repeat_ngram_size=None,
# bad_words_ids = None,
# # bos_token_id = None,
# pad_token_id = None,
# eos_token_id = None,
# batch_size=4,
# num_return_sequences=None,
# length_penalty = None,
# num_beams=None,
# vocab_size=None,
# # no_repeat_ngram_size = None,
# # num_return_sequences = None,
# encoder_outputs=None,
# attention_mask = None,
# # decoder_start_token_id = None,
# use_cache = None,
# )
```
> Hello! Could you provide all the information relative to your environment as asked in the template, as well as the code that generates the error? Thanks!
<|||||>The `self` is the instance of the class, calling it as `self(...)` results in calling the `__call__` method.<|||||>see https://www.geeksforgeeks.org/callable-in-python/<|||||>> The `self` is the instance of the class, calling it as `self(...)` results in calling the `__call__` method.
But where is __call__ method of class TFGenerationMixin?<|||||>Ah, my bad, I hadn't taken a close enough look at your code. You can't initialize a `TFGenerationMixin` like this, as it is an abstract class. It is there to have TF classes inherit from that abstract class, not to be used as-is.
Could you tell me what you're trying to do so I could guide you towards the classes you should use?<|||||>TFPreTrainedModel inherit from TFGenerationMixin, and TFPreTrainedModel is base class for all TF models. Some TF models have __call__ method, leading to self(..) in TFGenerationMixin can be callable, that's right?
This question appear just when I read the code, not for do something.
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 8,009 | closed | Doc styling | # What does this PR do?
Add a script that does some styling on the doc files and docstrings.
| 10-23-2020 21:14:11 | 10-23-2020 21:14:11 | |
transformers | 8,008 | closed | TextDataset bug with big files | - `transformers` version: 3.0.2
- Platform: Linux-4.4.0-137-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@mfuntowicz
## Information
The model I am using (Bert, XLNet ...): XLMRobertaTokenizer
The problem arises when using:
* [ x ] my own modified script: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Run the following code
```Python
from transformers import XLMRobertaTokenizer, TextDataset
max_pos = 4096
tokenizer = XLMRobertaTokenizer.from_pretrained("xlm-roberta-base", model_max_length=max_pos)
tokenizer.model_max_length = max_pos
tokenizer.init_kwargs["model_max_length"] = max_pos
train_datapath = "path/to/train.raw"
train_dataset = TextDataset(
tokenizer=tokenizer,
file_path=train_datapath,
block_size=tokenizer.max_len
)
```
## Error Messages
```
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Aborted
```
## Expected behavior
My file `train.raw` contains 349912098 words and is 2.3 GB. When I try with a smaller dataset (262703214 words, 251 MB), it works fine. I have modified this line in the TextDataset class https://github.com/huggingface/transformers/blob/a16e568f22a4d07813ba76343309ec20096115a5/src/transformers/data/datasets/language_modeling.py#L68 to understand where the problem is. It happens in the `tokenizer.tokenize(text)` part. I have changed it so that it does not tokenize the entire text directly but processes chunks of the text each time, concatenating the results into a final list. Although memory hungry, this method works fine (and memory is not my problem).
**Note:** I have executed this script in a machine with 504G of RAM, and the script used approx. 36G when it died.
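A minimal sketch of the chunked tokenization I used instead (the chunk size is arbitrary, naive character slicing can split words at chunk boundaries, and this is illustrative rather than my exact code):
```python
def tokenize_in_chunks(tokenizer, text, chunk_size=1_000_000):
    # Feed sentencepiece slices of the raw text instead of one huge string
    tokens = []
    for start in range(0, len(text), chunk_size):
        tokens.extend(tokenizer.tokenize(text[start:start + chunk_size]))
    return tokens

tokenized_text = tokenizer.convert_tokens_to_ids(tokenize_in_chunks(tokenizer, text))
```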
| 10-23-2020 20:40:11 | 10-23-2020 20:40:11 | Hi! Could you put the whole stack trace here? I fear it might be an internal sentencepiece error, for which we'll be unable to help and you would have more luck opening an issue on the sentencepiece repo directly.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 8,007 | closed | Ci test tf super slow | Enable the slow TF suite on GPU. Below are the current failing tests.
Investigating whether they're actually failing or if it's for some other reason.
- [x] ~FAILED tests/test_modeling_marian.py::ModelManagementTests::test_model_names~
- [x] ~FAILED tests/test_modeling_prophetnet.py::ProphetNetModelIntegrationTest::test_cnndm_inference~
- [x] ~FAILED tests/test_modeling_prophetnet.py::ProphetNetModelIntegrationTest::test_pretrained_checkpoint_hidden_states~
- [x] ~FAILED tests/test_modeling_prophetnet.py::ProphetNetModelIntegrationTest::test_question_gen_inference~
- [x] ~FAILED tests/test_modeling_roberta.py::RobertaModelIntegrationTest::test_inference_classification_head~
- [x] ~FAILED tests/test_modeling_roberta.py::RobertaModelIntegrationTest::test_inference_masked_lm~
- [x] ~FAILED tests/test_modeling_roberta.py::RobertaModelIntegrationTest::test_inference_no_head~
- [x] ~FAILED tests/test_modeling_squeezebert.py::SqueezeBertModelIntegrationTest::test_inference_classification_head~
- [x] ~FAILED tests/test_modeling_tf_albert.py::TFAlbertModelTest::test_saved_model_with_attentions_output~
- [x] ~FAILED tests/test_modeling_tf_albert.py::TFAlbertModelTest::test_saved_model_with_hidden_states_output~
- [x] ~FAILED tests/test_modeling_tf_camembert.py::TFCamembertModelIntegrationTest::test_output_embeds_base_model~
- [x] ~FAILED tests/test_modeling_tf_electra.py::TFElectraModelTest::test_saved_model_with_attentions_output~
- [x] ~FAILED tests/test_modeling_tf_electra.py::TFElectraModelTest::test_saved_model_with_hidden_states_output~
- [x] ~FAILED tests/test_modeling_tf_flaubert.py::TFFlaubertModelTest::test_saved_model_with_attentions_output~
- [x] ~FAILED tests/test_modeling_tf_flaubert.py::TFFlaubertModelTest::test_saved_model_with_hidden_states_output~
- [x] ~FAILED tests/test_modeling_tf_funnel.py::TFFunnelModelTest::test_saved_model_with_attentions_output~
- [x] ~FAILED tests/test_modeling_tf_funnel.py::TFFunnelModelTest::test_saved_model_with_hidden_states_output~
- [x] ~FAILED tests/test_modeling_tf_funnel.py::TFFunnelBaseModelTest::test_saved_model_with_attentions_output~
- [x] ~FAILED tests/test_modeling_tf_funnel.py::TFFunnelBaseModelTest::test_saved_model_with_hidden_states_output~
- [x] FAILED tests/test_modeling_tf_longformer.py::TFLongformerModelTest::test_saved_model_with_attentions_output
- [x] ~FAILED tests/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_saved_model_with_attentions_output~
- [x] ~FAILED tests/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_saved_model_with_hidden_states_output~
- [x] ~FAILED tests/test_modeling_tf_mobilebert.py::TFMobileBertModelTest::test_model_from_pretrained~
- [x] FAILED tests/test_modeling_tf_t5.py::TFT5ModelTest::test_saved_model_with_attentions_output
- [x] FAILED tests/test_modeling_tf_t5.py::TFT5ModelTest::test_saved_model_with_hidden_states_output
- [x] ~FAILED tests/test_modeling_tf_xlm_roberta.py::TFFlaubertModelIntegrationTest::test_output_embeds_base_model~
- [x] ~FAILED tests/test_modeling_tf_xlnet.py::TFXLNetModelLanguageGenerationTest::test_lm_generate_xlnet_base_cased~
- [x] ~FAILED tests/test_modeling_transfo_xl.py::TransfoXLModelLanguageGenerationTest::test_lm_generate_transfo_xl_wt103~
- [x] ~FAILED tests/test_modeling_xlm_prophetnet.py::XLMProphetNetModelIntegrationTest::test_ntg_hidden_states~
- [x] ~FAILED tests/test_modeling_xlm_prophetnet.py::XLMProphetNetModelIntegrationTest::test_pretrained_checkpoint_hidden_states~
- [x] ~FAILED tests/test_modeling_xlm_prophetnet.py::XLMProphetNetModelIntegrationTest::test_xprophetnet_ntg_inference~
- [x] ~FAILED tests/test_modeling_xlm_roberta.py::XLMRobertaModelIntegrationTest::test_xlm_roberta_base~
- [x] ~FAILED tests/test_modeling_xlm_roberta.py::XLMRobertaModelIntegrationTest::test_xlm_roberta_large~
- [x] FAILED tests/test_pipelines.py::PipelineCommonTests::test_tf_defaults - Value...
- [x] ~FAILED tests/test_tokenization_fsmt.py::FSMTTokenizationTest::test_match_encode_decode~
And while we're at it here are the remaining failing tests in PyTorch slow multi-gpu tests:
- [x] FAILED tests/test_data_collator.py::DataCollatorIntegrationTest::test_nsp - K...
- [x] FAILED tests/test_modeling_common.py::ModelUtilsTest::test_model_from_pretrained
- [x] ~FAILED tests/test_modeling_rag.py::RagModelIntegrationTests::test_rag_sequence_generate_batch~
- [x] ~FAILED tests/test_modeling_rag.py::RagModelIntegrationTests::test_rag_sequence_generate_beam~
- [x] ~FAILED tests/test_modeling_rag.py::RagModelIntegrationTests::test_rag_token_generate_batch~
- [x] ~FAILED tests/test_modeling_rag.py::RagModelIntegrationTests::test_rag_token_generate_beam~
- [x] ~FAILED tests/test_modeling_rag.py::RagModelIntegrationTests::test_rag_token_inference~
- [x] FAILED tests/test_pipelines.py::MonoColumnInputTestCase::test_integration_torch_conversation
- [x] FAILED tests/test_pipelines.py::MonoColumnInputTestCase::test_integration_torch_conversation_truncated_history
- [x] FAILED tests/test_pipelines.py::DialoguePipelineTests::test_torch_conversation
- [x] FAILED tests/test_pipelines.py::PipelineCommonTests::test_pt_defaults - Value...
The RAG integration test failures seem to be caused by OOM errors; I deactivated them in the multi-gpu setup as those GPUs have less memory.
Status:
Done! Waiting for the green tests to merge. | 10-23-2020 20:12:28 | 10-23-2020 20:12:28 | `tests/test_tokenization_fsmt.py::FSMTTokenizationTest::test_match_encode_decode` fixed in https://github.com/huggingface/transformers/pull/8031
I checked it off in your list.<|||||>@patrickvonplaten can you take a look at the failing TF-Longformer and TF-T5 tests?<|||||>> @patrickvonplaten can you take a look at the failing TF-Longformer and TF-T5 tests?
Will take a look at the TF-Longformer Test :-) I tried a bit unsuccessfully to debug the `TF-T5` test for 2h a week ago and I didn't manage to get rid of `cast_bool_....` for `TFT5` at the moment. I think the hacky `cast_bool_...` function is the reason for the T5 Test Failure. Not sure if it's worth it spending a lot of time here again. :-/ I could comment out the test for now? Or @jplu @LysandreJik do you have a nice TF insight to solve it? <|||||>For now I propose to comment them out. I will do a pass over it later in a couple of weeks. I planned to go through each TF model anyway.<|||||>I commented both TF T5 tests out ATM, same for the longformer. I think the Longformer test can be fixed, but the T5 tests cannot, at least not without introducing breaking changes.
I think that we would need to have an additional layer between the `TFT5Model` and the encoder/decoder models, same as we do with every other model (the xxxMainLayer), otherwise we won't be able to use the saved model. Even with the `cast_bool_....` they're not currently usable as saved models.
It's the same issue with BART, and why those two tests are commented in BART as well imo.<|||||>All tests are passing now. The only error on the scheduled is because of something on the hub with one tiny models, which I'm fixing right now. |
transformers | 8,006 | closed | [tokenizers] Fixing #8001 - Adding tests on tokenizers serialization | # What does this PR do?
Fixes #8001
Now the tokenizer classes have to send all the keyword arguments of `__init__` up to the base class of the tokenizer (via `super().__init__`), where they are stored in `init_kwargs` for serialized saving/reloading with `save_pretrained`/`from_pretrained`.
Adds a test on tokenizer serialization checking that all the keyword arguments of `__init__` are found in the saved `init_kwargs`, to avoid forgetting to send some arguments up in future (and current) tokenizers.
Make T5 tokenizer serialization more robust.
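To illustrate the pattern (a simplified, hypothetical tokenizer rather than an actual class from the library — the exact base-class bookkeeping may differ slightly across versions):

```python
from transformers import PreTrainedTokenizer

class MyTokenizer(PreTrainedTokenizer):
    def __init__(self, vocab_file, do_lower_case=True, unk_token="<unk>", **kwargs):
        # Forward every init keyword argument to the base class so that it ends up
        # in `init_kwargs` and survives a save_pretrained/from_pretrained round trip.
        super().__init__(do_lower_case=do_lower_case, unk_token=unk_token, **kwargs)
        self.vocab_file = vocab_file
        self.do_lower_case = do_lower_case
```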
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| 10-23-2020 15:49:26 | 10-23-2020 15:49:26 | |
transformers | 8,005 | closed | Differences between facebook/bart-base and facebook/bart-large? | # ❓ Questions & Help
Is there any other difference between `facebook/bart-base` and `facebook/bart-large` (other than dimensions, heads and layers)?
## Who can help
@sshleifer @WiseDoge
## Environment info
- transformers version: 3.3.1
- Python version: 3.6.12
- PyTorch version (GPU?): 1.4.0 GPU-version
## Command:
I'm using the seq2seq/finetune.py script to finetune both BARTs.
```
python finetune.py \
--data_dir=${DATA_DIR} \
--learning_rate=3e-5 \
--num_train_epochs 5 \
--task summarization \
--model_name_or_path=${MODEL} \
--train_batch_size=4 \
--eval_batch_size=4 \
--gpus 1 \
--output_dir=$OUTPUT_DIR \
--max_source_length=256 \
--max_target_length=256 \
--val_max_target_length=256 \
--test_max_target_length=256 \
--eval_max_gen_length=256 \
--do_train --do_predict \
--eval_beams 5
```
${MODEL} model can be `facebook/bart-base` or `facebook/bart-large`
## Details
When I finetune facebook/bart-base, it works well:
```
"input_ids": " <s> ( report :ARG1 ( station :ARG1 ( troop :mod ( country :wiki Russia :name ( name :op1 Russia ) ) :ARG0-of ( withdraw :ARG2 ( country :quant 3 :location ( sea :wiki Baltic_Sea :name ( name :op1 Baltic :op2 Sea ) ) ) ) ) :ARG2 ( and :op1 ( state :wiki - :name ( name :op1 Jalininggele ) :location country ) :op2 ( state :wiki - :name ( name :op1 Simolingsike ) ) :op3 ( city :wiki - :name ( name :op1 Yelinia ) :location ( relative-position :op1 ( city :wiki Moscow :name ( name :op1 Moscow ) ) :quant ( distance-quantity :quant 300 :unit ( kilometer ) ) ) ) ) :mod ( respective ) ) )</s><pad><pad><pad>",
"labels": "<s> It is reported that the Russian troops that withdrew from the three Baltic Sea countries will be stationed respectively in the Russian state of Jalininggele, the state of Simolingsike and Yelinia city which is 300 kilometers away from Moscow.</s>",
"decoder_input_ids": "</s><s> It is reported that the Russian troops that withdrew from the three Baltic Sea countries will be stationed respectively in the Russian state of Jalininggele, the state of Simolingsike and Yelinia city which is 300 kilometers away from Moscow.",
"generated_ids": "</s><s> Russian troops reported to be stationed in the 3 Baltic Sea countries of Jalininggele, Simolingsike and Yelinia 300 kilometers (110 miles) from Moscow.</s><pad><pad><pad><pad><pad><pad><pad>"
```
When I finetune facebook/bart-large, it does not generate reasonable output:
```
"input_ids": "<s> ( report :ARG1 ( station :ARG1 ( troop :mod ( country :wiki Russia :name ( name :op1 Russia ) ) :ARG0-of ( withdraw :ARG2 ( country :quant 3 :location ( sea :wiki Baltic_Sea :name ( name :op1 Baltic :op2 Sea ) ) ) ) ) :ARG2 ( and :op1 ( state :wiki - :name ( name :op1 Jalininggele ) :location country ) :op2 ( state :wiki - :name ( name :op1 Simolingsike ) ) :op3 ( city :wiki - :name ( name :op1 Yelinia ) :location ( relative-position :op1 ( city :wiki Moscow :name ( name :op1 Moscow ) ) :quant ( distance-quantity :quant 300 :unit ( kilometer ) ) ) ) ) :mod ( respective ) ) )</s><pad><pad><pad>",
"labels": "<s> It is reported that the Russian troops that withdrew from the three Baltic Sea countries will be stationed respectively in the Russian state of Jalininggele, the state of Simolingsike and Yelinia city which is 300 kilometers away from Moscow.</s>",
"decoder_input_ids": "</s><s> It is reported that the Russian troops that withdrew from the three Baltic Sea countries will be stationed respectively in the Russian state of Jalininggele, the state of Simolingsike and Yelinia city which is 300 kilometers away from Moscow.",
"generated_ids": "</s><s><s><s><s><s><s><s><s><s><s> ... <s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s></s>"
```
I'm using the same code, but only the `facebook/bart-base` model works. In a previous transformers version, both worked, but not in this one (3.3.1).
| 10-23-2020 15:26:39 | 10-23-2020 15:26:39 | If you look at
https://s3.amazonaws.com/models.huggingface.co/bert/facebook/bart-large/config.json
and
https://s3.amazonaws.com/models.huggingface.co/bert/facebook/bart-base/config.json
(how to do this for any model: go to [model hub](https://s3.amazonaws.com/models.huggingface.co/) and click see raw config file)
you will see different `task_specific_params`. These are used for fine-tuning by default so bart-large
is forced to generate at least 56 tokens.
There are many ways to fix. Easiest is to comment out this line https://github.com/huggingface/transformers/blob/master/examples/seq2seq/finetune.py#L66
More involved would be to make a local copy of the config and insert the generation parameters you want. You can pass it to finetune.py with `--config_name`.
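Something along these lines should work for the local copy (rough, untested sketch — the output path and the exact values are placeholders):

```python
import os
from transformers import BartConfig

# Load the pretrained config, relax the summarization generation defaults that
# force long outputs, and save the copy so finetune.py can be pointed at it.
config = BartConfig.from_pretrained("facebook/bart-large")
task_params = dict(config.task_specific_params or {})
task_params.setdefault("summarization", {})
task_params["summarization"].update({"min_length": 0, "max_length": 256, "length_penalty": 1.0})
config.task_specific_params = task_params

os.makedirs("./bart-large-custom-config", exist_ok=True)
config.save_pretrained("./bart-large-custom-config")
# then: python finetune.py ... --config_name ./bart-large-custom-config
```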
I will think about how to update bart-base and bart-large to have more reasonable task_specific_params.<|||||>cc @patil-suraj @stas00 @patrickvonplaten for awareness of a very sneaky bug.<|||||>@sshleifer , thank you very much for your reply. Indeed, I have checked those configurations. So I changed the parameters for the `generate` method to consider min_length=0:
```
generated_ids = self.model.generate(
batch["input_ids"],
attention_mask=batch["attention_mask"],
use_cache=True,
decoder_start_token_id=self.decoder_start_token_id,
num_beams=self.eval_beams,
no_repeat_ngram_size=0,
min_length=0,
max_length=self.eval_max_length,
length_penalty=1.0
)
```
I used this code for both `facebook/bart-base` and `facebook/bart-large`, and the outputs for `bart-large` are as I mentioned. I have been trying to figure out the reason over the last few days without success. Maybe I'm doing something wrong, but I could not discover what it is yet.
Another point is that the generation for `bart-large` is much slower than `bart-base`. Maybe it is because the model is generating tokens until the limit (max_length).<|||||>How did you call `generate` to produce the outputs in your Issue Description?
Your change to finetune.py will not change the config.
<|||||>This is my `_generative_step` method:
```
def _generative_step(self, batch: dict) -> dict:
t0 = time.time()
generated_ids = self.model.generate(
batch["input_ids"],
attention_mask=batch["attention_mask"],
use_cache=True,
decoder_start_token_id=self.decoder_start_token_id,
num_beams=self.eval_beams,
no_repeat_ngram_size=0,
min_length=0,
max_length=self.eval_max_length,
length_penalty=1.0
)
gen_time = (time.time() - t0) / batch["input_ids"].shape[0]
preds: List[str] = self.ids_to_clean_text(generated_ids)
target: List[str] = self.ids_to_clean_text(batch["labels"])
a = self.tokenizer.batch_decode(batch["input_ids"].tolist())
b = self.tokenizer.batch_decode(batch["labels"].tolist())
c = self.tokenizer.batch_decode(generated_ids)
pad_token_id = self.tokenizer.pad_token_id
tgt_ids = batch["labels"]
if isinstance(self.model, T5ForConditionalGeneration):
decoder_input_ids = self.model._shift_right(tgt_ids)
else:
decoder_input_ids = shift_tokens_right(tgt_ids, pad_token_id)
e = self.tokenizer.batch_decode(decoder_input_ids.tolist())
loss_tensors = self._step(batch)
base_metrics = {name: loss for name, loss in zip(self.loss_names, loss_tensors)}
rouge: Dict = self.calc_generative_metrics(preds, target)
summ_len = np.mean(lmap(len, generated_ids))
base_metrics.update(gen_time=gen_time, gen_len=summ_len, preds=preds, target=target, a=a, b=b, c=c, e=e, **rouge)
return base_metrics
```
`_step` method:
```
def _step(self, batch: dict) -> Tuple:
pad_token_id = self.tokenizer.pad_token_id
src_ids, src_mask = batch["input_ids"], batch["attention_mask"]
tgt_ids = batch["labels"]
if isinstance(self.model, T5ForConditionalGeneration):
decoder_input_ids = self.model._shift_right(tgt_ids)
else:
decoder_input_ids = shift_tokens_right(tgt_ids, pad_token_id)
if not self.already_saved_batch: # This would be slightly better if it only happened on rank zero
batch["decoder_input_ids"] = decoder_input_ids
self.save_readable_batch(batch)
outputs = self(src_ids, attention_mask=src_mask, decoder_input_ids=decoder_input_ids, use_cache=False)
lm_logits = outputs[0]
if self.hparams.label_smoothing == 0:
# Same behavior as modeling_bart.py, besides ignoring pad_token_id
ce_loss_fct = torch.nn.CrossEntropyLoss(ignore_index=pad_token_id)
assert lm_logits.shape[-1] == self.vocab_size
loss = ce_loss_fct(lm_logits.view(-1, lm_logits.shape[-1]), tgt_ids.view(-1))
else:
lprobs = torch.nn.functional.log_softmax(lm_logits, dim=-1)
loss, nll_loss = label_smoothed_nll_loss(
lprobs, tgt_ids, self.hparams.label_smoothing, ignore_index=pad_token_id
)
return (loss,)
```
This is my validation_epoch_end:
```
def validation_epoch_end(self, outputs, prefix="val") -> Dict:
self.step_count += 1
losses = {k: torch.stack([x[k] for x in outputs]).mean() for k in self.loss_names}
loss = losses["loss"]
generative_metrics = {
k: np.array([x[k] for x in outputs]).mean() for k in self.metric_names + ["gen_time", "gen_len"]
}
metric_val = (
generative_metrics[self.val_metric] if self.val_metric in generative_metrics else losses[self.val_metric]
)
metric_tensor: torch.FloatTensor = torch.tensor(metric_val).type_as(loss)
generative_metrics.update({k: v.item() for k, v in losses.items()})
losses.update(generative_metrics)
all_metrics = {f"{prefix}_avg_{k}": x for k, x in losses.items()}
all_metrics["step_count"] = self.step_count
self.metrics[prefix].append(all_metrics) # callback writes this to self.metrics_save_path
preds = flatten_list([x["preds"] for x in outputs])
val_outputs_folder = "val_outputs"
os.system("mkdir -p " + os.path.join(self.hparams.output_dir, val_outputs_folder))
if "preds" in outputs[0]:
tb_all = {}
idx_tb = 0
for output_batch in outputs:
a,b,c,e = output_batch["a"], output_batch["b"], output_batch["c"], output_batch["e"]
for aa,bb,ee,cc in zip(a,b,e,c):
tb_all[idx_tb] = {}
tb_all[idx_tb]['input_ids'] = aa
tb_all[idx_tb]['labels'] = bb
tb_all[idx_tb]['decoder_input_ids'] = ee
tb_all[idx_tb]['generated_ids'] = cc
idx_tb += 1
file_debug = os.path.join(self.hparams.output_dir, val_outputs_folder,
"debug_" +
str(self.step_count) + ".json")
save_json(tb_all, file_debug)
return {
"log": all_metrics,
"preds": preds,
f"{prefix}_loss": loss,
f"{prefix}_{self.val_metric}": metric_tensor,
}
```
So I use the `debug_k.json` file to check the outputs. Sorry for the variable names.
One example for `bart-base`:
```
"1366": {
"input_ids": "<s> ( report :ARG1 ( station :ARG1 ( troop :mod ( country :wiki Russia :name ( name :op1 Russia ) ) :ARG0-of ( withdraw :ARG2 ( country :quant 3 :location ( sea :wiki Baltic_Sea :name ( name :op1 Baltic :op2 Sea ) ) ) ) ) :ARG2 ( and :op1 ( state :wiki - :name ( name :op1 Jalininggele ) :location country ) :op2 ( state :wiki - :name ( name :op1 Simolingsike ) ) :op3 ( city :wiki - :name ( name :op1 Yelinia ) :location ( relative-position :op1 ( city :wiki Moscow :name ( name :op1 Moscow ) ) :quant ( distance-quantity :quant 300 :unit ( kilometer ) ) ) ) ) :mod ( respective ) ) )</s><pad><pad><pad>",
"labels": "<s> It is reported that the Russian troops that withdrew from the three Baltic Sea countries will be stationed respectively in the Russian state of Jalininggele, the state of Simolingsike and Yelinia city which is 300 kilometers away from Moscow.</s>",
"decoder_input_ids": "</s><s> It is reported that the Russian troops that withdrew from the three Baltic Sea countries will be stationed respectively in the Russian state of Jalininggele, the state of Simolingsike and Yelinia city which is 300 kilometers away from Moscow.",
"generated_ids": "</s><s> Russian troops withdrawing from 3 Baltic Sea countries are reported to have respectively been stationed in the Baltic Sea states of Jalininggele,Simolingsike and Yelinia 300 kilometers away from Moscow.</s>"
},
```
one example for `bart-large`:
```
"1366": {
"input_ids": "<s> ( report :ARG1 ( station :ARG1 ( troop :mod ( country :wiki Russia :name ( name :op1 Russia ) ) :ARG0-of ( withdraw :ARG2 ( country :quant 3 :location ( sea :wiki Baltic_Sea :name ( name :op1 Baltic :op2 Sea ) ) ) ) ) :ARG2 ( and :op1 ( state :wiki - :name ( name :op1 Jalininggele ) :location country ) :op2 ( state :wiki - :name ( name :op1 Simolingsike ) ) :op3 ( city :wiki - :name ( name :op1 Yelinia ) :location ( relative-position :op1 ( city :wiki Moscow :name ( name :op1 Moscow ) ) :quant ( distance-quantity :quant 300 :unit ( kilometer ) ) ) ) ) :mod ( respective ) ) )</s><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad>",
"labels": "<s> It is reported that the Russian troops that withdrew from the three Baltic Sea countries will be stationed respectively in the Russian state of Jalininggele, the state of Simolingsike and Yelinia city which is 300 kilometers away from Moscow.</s>",
"decoder_input_ids": "</s><s> It is reported that the Russian troops that withdrew from the three Baltic Sea countries will be stationed respectively in the Russian state of Jalininggele, the state of Simolingsike and Yelinia city which is 300 kilometers away from Moscow.",
"generated_ids": "</s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s></s>"
},
```
<|||||>@sshleifer I have changed the code (`3.3.1`) version in order to use the same processed decoder input for the model as the one used in transformer version `2.11.0` and it worked for both BARTs! Both BARTs (`facebook/bart-base` and `facebook/bart-large`) give good BLEU scores and generate good outputs!
The changed code:
```
def _step(self, batch: dict) -> Tuple:
pad_token_id = self.tokenizer.pad_token_id
src_ids, src_mask = batch["input_ids"], batch["attention_mask"]
if isinstance(self.model, T5ForConditionalGeneration):
tgt_ids = batch["labels"]
decoder_input_ids = self.model._shift_right(tgt_ids)
else:
#decoder_input_ids = shift_tokens_right(tgt_ids, pad_token_id)
y = batch["labels"]
decoder_input_ids = y[:, :-1].contiguous()
tgt_ids = y[:, 1:].clone()
if not self.already_saved_batch: # This would be slightly better if it only happened on rank zero
batch["decoder_input_ids"] = decoder_input_ids
self.save_readable_batch(batch)
outputs = self(src_ids, attention_mask=src_mask, decoder_input_ids=decoder_input_ids, use_cache=False)
lm_logits = outputs[0]
if self.hparams.label_smoothing == 0:
# Same behavior as modeling_bart.py, besides ignoring pad_token_id
ce_loss_fct = torch.nn.CrossEntropyLoss(ignore_index=pad_token_id)
assert lm_logits.shape[-1] == self.vocab_size
loss = ce_loss_fct(lm_logits.view(-1, lm_logits.shape[-1]), tgt_ids.view(-1))
else:
lprobs = torch.nn.functional.log_softmax(lm_logits, dim=-1)
loss, nll_loss = label_smoothed_nll_loss(
lprobs, tgt_ids, self.hparams.label_smoothing, ignore_index=pad_token_id
)
return (loss,)
```
an example generated by `facebook/bart-base` using the new code:
```
"1366": {
"input_ids": "<s> ( report :ARG1 ( station :ARG1 ( troop :mod ( country :wiki Russia :name ( name :op1 Russia ) ) :ARG0-of ( withdraw :ARG2 ( country :quant 3 :location ( sea :wiki Baltic_Sea :name ( name :op1 Baltic :op2 Sea ) ) ) ) ) :ARG2 ( and :op1 ( state :wiki - :name ( name :op1 Jalininggele ) :location country ) :op2 ( state :wiki - :name ( name :op1 Simolingsike ) ) :op3 ( city :wiki - :name ( name :op1 Yelinia ) :location ( relative-position :op1 ( city :wiki Moscow :name ( name :op1 Moscow ) ) :quant ( distance-quantity :quant 300 :unit ( kilometer ) ) ) ) ) :mod ( respective ) ) )</s><pad><pad><pad>",
"labels": " It is reported that the Russian troops that withdrew from the three Baltic Sea countries will be stationed respectively in the Russian state of Jalininggele, the state of Simolingsike and Yelinia city which is 300 kilometers away from Moscow.</s>",
"decoder_input_ids": "<s> It is reported that the Russian troops that withdrew from the three Baltic Sea countries will be stationed respectively in the Russian state of Jalininggele, the state of Simolingsike and Yelinia city which is 300 kilometers away from Moscow.",
"generated_ids": "</s> Russian troops withdrawing from 3 Baltic Sea countries have been reported to be stationed respectively in Jalininggele, Simolingsike and Yelinia 300 kilometers (200 miles) from Moscow.</s><pad><pad>"
},
```
an example generated by `facebook/bart-large` using the new code:
```
"1366": {
"input_ids": "<s> ( report :ARG1 ( station :ARG1 ( troop :mod ( country :wiki Russia :name ( name :op1 Russia ) ) :ARG0-of ( withdraw :ARG2 ( country :quant 3 :location ( sea :wiki Baltic_Sea :name ( name :op1 Baltic :op2 Sea ) ) ) ) ) :ARG2 ( and :op1 ( state :wiki - :name ( name :op1 Jalininggele ) :location country ) :op2 ( state :wiki - :name ( name :op1 Simolingsike ) ) :op3 ( city :wiki - :name ( name :op1 Yelinia ) :location ( relative-position :op1 ( city :wiki Moscow :name ( name :op1 Moscow ) ) :quant ( distance-quantity :quant 300 :unit ( kilometer ) ) ) ) ) :mod ( respective ) ) )</s><pad><pad><pad>",
"labels": " It is reported that the Russian troops that withdrew from the three Baltic Sea countries will be stationed respectively in the Russian state of Jalininggele, the state of Simolingsike and Yelinia city which is 300 kilometers away from Moscow.</s>",
"decoder_input_ids": "<s> It is reported that the Russian troops that withdrew from the three Baltic Sea countries will be stationed respectively in the Russian state of Jalininggele, the state of Simolingsike and Yelinia city which is 300 kilometers away from Moscow.",
"generated_ids": "</s> The Russian troop stations were respectively located in Jalininggele, Simolingsike and Yelinia located 300 kilometers (250 miles) away from Moscow in 3 countries on the Baltic Sea.</s><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad>"
},
```
What I don't understand is why the previous version only works for `bart-base`, in my experiments. Another question is what is the correct/better way to use the model (to use `shift_tokens_right` or another approach?)
<|||||>Interesting.
`shift_tokens_right` has always done better on my datasets, but it's interesting that you have the opposite experience. The old code `tgt_ids = y[:, 1:].clone()` doesn't work well for tokenizers (Marian, Pegasus, T5) that don't add a `<s>` token to the beginning of the sequence, because it deletes a token.
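For reference, a toy illustration of the difference (made-up token ids; the helper below is paraphrased from memory rather than copied from the repo):

```python
import torch

# Made-up token ids: 0 = <s>, 42/43/44 = content tokens, 2 = </s>, 1 = <pad>.
labels = torch.tensor([[0, 42, 43, 44, 2, 1]])

# Old-style slicing: decoder inputs drop the last position, targets drop the first.
decoder_input_ids_old = labels[:, :-1].contiguous()  # [[0, 42, 43, 44, 2]]
targets_old = labels[:, 1:].clone()                   # [[42, 43, 44, 2, 1]]

# shift_tokens_right-style: wrap the last non-pad token (usually </s>) to position 0
# and keep the full `labels` tensor as the targets.
def shift_tokens_right(input_ids, pad_token_id):
    prev_output_tokens = input_ids.clone()
    index_of_eos = (input_ids.ne(pad_token_id).sum(dim=1) - 1).unsqueeze(-1)
    prev_output_tokens[:, 0] = input_ids.gather(1, index_of_eos).squeeze(-1)
    prev_output_tokens[:, 1:] = input_ids[:, :-1]
    return prev_output_tokens

decoder_input_ids_new = shift_tokens_right(labels, pad_token_id=1)  # [[2, 0, 42, 43, 44, 2]]
```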
If you can replicate the results on a small/shareable dataset I would be happy to try to understand what's going on more deeply.<|||||>I can see a change in the behavior of `bart-large` between v3.0.2 and v3.1.0, which seems to be linked to your findings. Here's a minimal example for language generation:
```py
import transformers
from transformers import (
BartTokenizer,
BartForConditionalGeneration,
)
print(f'** transformers v{transformers.__version__} **')
tokenizer = BartTokenizer.from_pretrained('facebook/bart-large')
model = BartForConditionalGeneration.from_pretrained('facebook/bart-large')
input_txt = 'This is <mask> sentence.'
print(f'Input: "{input_txt}"')
inputs = tokenizer.encode(input_txt, return_tensors='pt')
outputs = model.generate(inputs)
output_txt = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(f'Output: "{output_txt}"')
```
For v3.0.2, it correctly produces
```bash
** transformers v3.0.2 **
Some weights of BartForConditionalGeneration were not initialized from the model checkpoint at facebook/bart-large and are newly initialized: ['final_logits_bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Input: "This is <mask> sentence."
Output: "This is a partial sentence."
```
while v3.1.0 repeats the first token:
```bash
** transformers v3.1.0 **
Input: "This is <mask> sentence."
Output: "ThisThis is a sentence."
```<|||||>Digging a bit deeper, I can trace the issue back to this line https://github.com/huggingface/transformers/blob/4b3ee9cbc53c6cf6cee6bfae86cc2c6ec0778ee5/src/transformers/modeling_bart.py#L1114
and, in turn, the default value of `force_bos_token_to_be_generated`:
https://github.com/huggingface/transformers/blob/4b3ee9cbc53c6cf6cee6bfae86cc2c6ec0778ee5/src/transformers/configuration_bart.py#L140
To restore behavior from v3.0.2, we can change that value manually
```py
...
config = BartConfig.from_pretrained('facebook/bart-large')
config.force_bos_token_to_be_generated = True
tokenizer = BartTokenizer.from_pretrained('facebook/bart-large')
model = BartForConditionalGeneration.from_pretrained('facebook/bart-large', config=config)
...
```
which gives
```bash
** transformers v3.1.0 **
Input: "This is <mask> sentence."
Output: "This is a partial sentence."
```
and even
```bash
** transformers v3.4.0 **
Input: "This is <mask> sentence."
Output: "This is a partial sentence."
```
@sshleifer What's the best approach to fix this? Modify bart-large's config.json?<|||||>Your solution is awesome, great catch!
I think the right fix is to
+ Update the docs
+ Add `task_specific_params : {'fill_mask': {'force_bos_token_to_be_generated': 'true'}}` to the `bart-base` and `bart-large` configs (sketched below).
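Roughly, applying the proposed entry could look like this (untested sketch — the `fill_mask` key is the proposal above, not something already present in the hub configs):

```python
from transformers import BartConfig

config = BartConfig.from_pretrained("facebook/bart-large")
task_params = dict(config.task_specific_params or {})
task_params["fill_mask"] = {"force_bos_token_to_be_generated": True}
config.task_specific_params = task_params

# A mask-filling script could then opt in explicitly:
for key, value in config.task_specific_params["fill_mask"].items():
    setattr(config, key, value)
assert config.force_bos_token_to_be_generated is True
```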
I am hesitant to change the default because `force_bos_token_to_be_generated = False` seems to be optimal for many fine-tuning tasks.<|||||>Added a mask filling example to the docs in #8421 .<|||||>:+1: Brilliant, thanks a lot @sshleifer !<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Hello
I'm using transformers 4.8.2 but the same problem still occurs.
I changed config.force_bos_token_to_be_generated=True.
********The Result**************
input_txt = 'This is <mask> sentence.'
output_txt = 'ThisThis is a sentence.'
anyone experience this??? <|||||>> Hello I'm using transformers 4.8.2 but there's still issue about same problem. I changed config.force_bos_token_to_be_generated=True. ********The Result************** input_txt = 'This is sentence.' output_txt = 'ThisThis is a sentence.'
>
> anyone experience this???
Hi @yeonsookKwak I have the same issue. Would you please share the solution if any? Thanks! |
transformers | 8,004 | closed | german medbert readme | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| 10-23-2020 13:31:33 | 10-23-2020 13:31:33 | Closed in favor of #8002 |
transformers | 8,003 | closed | Create model card for bert-italian-cased-finetuned-pos | 10-23-2020 12:25:11 | 10-23-2020 12:25:11 |