url stringlengths 62-66 | repository_url stringclasses 1 value | labels_url stringlengths 76-80 | comments_url stringlengths 71-75 | events_url stringlengths 69-73 | html_url stringlengths 50-56 | id int64 377M-2.15B | node_id stringlengths 18-32 | number int64 1-29.2k | title stringlengths 1-487 | user dict | labels list | state stringclasses 2 values | locked bool 2 classes | assignee dict | assignees list | comments sequence | created_at int64 1.54k-1.71k | updated_at int64 1.54k-1.71k | closed_at int64 1.54k-1.71k ⌀ | author_association stringclasses 4 values | active_lock_reason stringclasses 2 values | body stringlengths 0-234k ⌀ | reactions dict | timeline_url stringlengths 71-75 | state_reason stringclasses 3 values | draft bool 2 classes | pull_request dict |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/5620 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5620/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5620/comments | https://api.github.com/repos/huggingface/transformers/issues/5620/events | https://github.com/huggingface/transformers/pull/5620 | 653,833,523 | MDExOlB1bGxSZXF1ZXN0NDQ2Njc0Mzc5 | 5,620 | Fix re-tokenization (ignoring is_pretokenized=True) when passing a pretokenized batch to both batch_encode_plus and tokenizer.__call__ methods | {
"login": "amoux",
"id": 20451397,
"node_id": "MDQ6VXNlcjIwNDUxMzk3",
"avatar_url": "https://avatars.githubusercontent.com/u/20451397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amoux",
"html_url": "https://github.com/amoux",
"followers_url": "https://api.github.com/users/amoux/followers",
"following_url": "https://api.github.com/users/amoux/following{/other_user}",
"gists_url": "https://api.github.com/users/amoux/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amoux/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amoux/subscriptions",
"organizations_url": "https://api.github.com/users/amoux/orgs",
"repos_url": "https://api.github.com/users/amoux/repos",
"events_url": "https://api.github.com/users/amoux/events{/privacy}",
"received_events_url": "https://api.github.com/users/amoux/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,594 | 1,598 | 1,598 | NONE | null | # Bug
> Fix unexpected behavior (ignoring `is_pretokenized=True`) when passing already-tokenized input to `batch_encode_plus` and `self.__call__()`
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
batch_sentences = ['The rapid expansion of the current COVID - 19 pandemic.',
'Hospitals around the globe have had to implement drastic changes']
batch_tokenized = [tokenizer.tokenize(x) for x in batch_sentences]
```
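For reference, the pretokenized batch already contains WordPiece pieces rather than whole words (a small added sketch; the exact split assumes the standard `bert-base-cased` vocabulary):
```python
# The pretokenized input holds WordPiece tokens such as '##VI' and '##de'.
print(batch_tokenized[0])
# ['The', 'rapid', 'expansion', 'of', 'the', 'current', 'CO', '##VI', '##D',
#  '-', '19', 'pan', '##de', '##mic', '.']
```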
Correct output when passing either a single string or a batch of strings ✅
- Applies to types:
- `str` : one sentence
- `List[str]` : batch of sentences
```python
# also applies when using batch_encode_plus
inputs = tokenizer(batch_sentences, add_special_tokens=False)
for ids in inputs['input_ids']: print(tokenizer.decode(ids))
...
"The rapid expansion of the current COVID - 19 pandemic."
"Hospitals around the globe have had to implement drastic changes"
```
Incorrect output when passing either a single sequence of tokens or a batch of token sequences ❌
- Applies to types:
- `List[str]` : one sequence of string tokens
- `List[List[str]]` : batch of sequences of string tokens
```python
# also applies when using batch_encode_plus
inputs = tokenizer(batch_tokenized, add_special_tokens=False, is_pretokenized=True)
for ids in inputs['input_ids']: print(tokenizer.decode(ids))
...
"The rapid expansion of the current CO # # VI # # D - 19 pan # # de # # mic."
"Hospital # # s around the globe have had to implement drastic changes"
```
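Until the fix lands, converting the pretokenized tokens to ids directly sidesteps the re-tokenization (a minimal workaround sketch, using only the public API from the snippets above):
```python
# Workaround sketch: skip tokenize() for pretokenized input and map the
# string tokens straight to ids, then decode to check the round trip.
input_ids = [tokenizer.convert_tokens_to_ids(toks) for toks in batch_tokenized]
for ids in input_ids:
    print(tokenizer.decode(ids))
# "The rapid expansion of the current COVID - 19 pandemic."
# "Hospitals around the globe have had to implement drastic changes"
```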
## Cause of issue
> The problem arises when the sequences provided are a list of string tokens (a pretokenized input or a batch of token lists), as we can observe in the output above: the second condition in `get_input_ids()` completely ignores `is_pretokenized=True` and *re-tokenizes* the previously tokenized input!
```python
...
def get_input_ids(text):
if isinstance(text, str):
tokens = self.tokenize(text, **kwargs)
return self.convert_tokens_to_ids(tokens)
elif isinstance(text, (list, tuple)) and len(text) > 0 and isinstance(text[0], str):
if is_pretokenized:
# If the user set is_pretokenized=True, then the input is a batch of token string sequences.
# The expected behavior is then to convert tokens to ids and not to re-tokenize - ``self.tokenize()``
tokens = list(itertools.chain(*(self.tokenize(t, is_pretokenized=True, **kwargs) for t in text)))
return self.convert_tokens_to_ids(tokens)
else:
return self.convert_tokens_to_ids(text)
...
```
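The stray `# #` pieces in the output above come from exactly this: running `tokenize()` over a token that is already a WordPiece piece splits it again (a small added illustration, inferred from the decoded output shown earlier):
```python
# '##de' is not in the vocabulary as a word, so the basic tokenizer first
# breaks it into '#', '#', 'de' before WordPiece runs once more.
print(tokenizer.tokenize("##de"))  # ['#', '#', 'de']
```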
## Fix
> All I needed to do was flip the behavior. Easy fix!
```python
...
if is_pretokenized: # If already tokenized then, convert string tokens to token_ids
return self.convert_tokens_to_ids(text)
else: # Otherwise, tokenize to string tokens before converting to token_ids
tokens = list(itertools.chain(*(self.tokenize(t, is_pretokenized=True, **kwargs) for t in text)))
return self.convert_tokens_to_ids(tokens)
...
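# Note (added): with this flip, tokenizer(batch_tokenized, is_pretokenized=True)
# maps the string tokens directly to ids, and decoding them reproduces the
# sentences from the "correct output" example above.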
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5620/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5620/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5620",
"html_url": "https://github.com/huggingface/transformers/pull/5620",
"diff_url": "https://github.com/huggingface/transformers/pull/5620.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5620.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/5619 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5619/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5619/comments | https://api.github.com/repos/huggingface/transformers/issues/5619/events | https://github.com/huggingface/transformers/issues/5619 | 653,830,607 | MDU6SXNzdWU2NTM4MzA2MDc= | 5,619 | Should t5-small generate coherent text as summaries without finetuning? | {
"login": "marton-avrios",
"id": 59836119,
"node_id": "MDQ6VXNlcjU5ODM2MTE5",
"avatar_url": "https://avatars.githubusercontent.com/u/59836119?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marton-avrios",
"html_url": "https://github.com/marton-avrios",
"followers_url": "https://api.github.com/users/marton-avrios/followers",
"following_url": "https://api.github.com/users/marton-avrios/following{/other_user}",
"gists_url": "https://api.github.com/users/marton-avrios/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marton-avrios/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marton-avrios/subscriptions",
"organizations_url": "https://api.github.com/users/marton-avrios/orgs",
"repos_url": "https://api.github.com/users/marton-avrios/repos",
"events_url": "https://api.github.com/users/marton-avrios/events{/privacy}",
"received_events_url": "https://api.github.com/users/marton-avrios/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi @marton-avrios can you share with me the example? Let me try them out as well and see if I found any edge cases where it is not coherent (subjectively). Can you also point out the dataset?\r\n\r\nWill be appreciated if you can share the example reference, dataset link, or your example source code.",
"just go to `examples/seq2seq` follow the instructions for obtaining the XSUM dataset and run\r\n```\r\npython run_eval.py t5-small xsum/val.source t5_val_generations.txt \\\r\n --reference_path xsum/val.target \\\r\n --score_path xsum_rouge.json \\\r\n --task summarization \\\r\n --n_obs 100 \\\r\n --device cuda \\\r\n --fp16 \\\r\n --bs 32\r\n```\r\nit creates relatively coherent text in `t5_val_generations.txt` which I would not expect from a model without any finetuning.",
"Ahh that one. I think it's pretrained already although I'm not sure which pretraining dataset. I think your doubt is that we shoould need a bit of training iteration for different datasets to make the model good? Both are news dataset so I won't be too surprised that we don't need additional iteration. I think the XSum highlights on one sentenced and shorter summary than CNN/Daily Mail so the label is different.",
"Ah, so they are already finetuned versions. I thought that `t5-small` and the other `t5-*` models were only trained on denoising tasks.",
"I ran t5-small for the CNN/DM test dataset and the output produced are meaningful complete sentences but not close to summaries. Which is expected because they are just pre-trained on a large corpus and not fine-tuned on summarization datasets explicitly. No, they are just pretrained versions and not fine-tuned ones.\r\n\r\nIn case if you want to fine-tune them on CNN/DM or X-Sum dataset, u can run the finetune_t5.sh script maybe and use the model saved to produce outputs again. you will surely find the fine-tuned ones performing better.",
"I see, that explains well the observation then. When you run the t5 model, you will be warned with the following:\r\n\r\n```\r\nSome weights of T5ForConditionalGeneration were not initialized from the model checkpoint at t5-base and are newly initialized: ['encoder.embed_tokens.weight', 'decoder.embed_tokens.weight', 'lm_head.weight']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\n```\r\n\r\nSo I guess you still need to train them for summarization task @marton-avrios ",
"I also apologies for the misinformation. I thought they were pretrained on CNN/DailyMail dataset as that is the impression I get from this [doc](https://huggingface.co/transformers/task_summary.html)",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,594 | 1,600 | 1,600 | NONE | null | I am following the summarization example and if I run `run_eval.py` for `t5-small` and `xsum` without finetuning I still get coherent, new (similar to source but not the same) and meaningful texts as summaries. The doc does not mention that it was pretrained on any kind of summarization task. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5619/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5619/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5618 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5618/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5618/comments | https://api.github.com/repos/huggingface/transformers/issues/5618/events | https://github.com/huggingface/transformers/pull/5618 | 653,824,548 | MDExOlB1bGxSZXF1ZXN0NDQ2NjY3MDYx | 5,618 | Generate up to max_target_length sequences | {
"login": "tetsef",
"id": 58531890,
"node_id": "MDQ6VXNlcjU4NTMxODkw",
"avatar_url": "https://avatars.githubusercontent.com/u/58531890?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tetsef",
"html_url": "https://github.com/tetsef",
"followers_url": "https://api.github.com/users/tetsef/followers",
"following_url": "https://api.github.com/users/tetsef/following{/other_user}",
"gists_url": "https://api.github.com/users/tetsef/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tetsef/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tetsef/subscriptions",
"organizations_url": "https://api.github.com/users/tetsef/orgs",
"repos_url": "https://api.github.com/users/tetsef/repos",
"events_url": "https://api.github.com/users/tetsef/events{/privacy}",
"received_events_url": "https://api.github.com/users/tetsef/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5618?src=pr&el=h1) Report\n> Merging [#5618](https://codecov.io/gh/huggingface/transformers/pull/5618?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/fa5423b1695cd24856bcff47214172e0f540d924&el=desc) will **decrease** coverage by `0.91%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5618?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5618 +/- ##\n==========================================\n- Coverage 77.79% 76.88% -0.92% \n==========================================\n Files 145 145 \n Lines 25355 25355 \n==========================================\n- Hits 19726 19495 -231 \n- Misses 5629 5860 +231 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5618?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5618/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5618/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5618/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.50%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5618/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.09% <0.00%> (+1.37%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5618/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5618/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.18% <0.00%> (+74.91%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5618?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5618?src=pr&el=footer). Last update [fa5423b...db13af1](https://codecov.io/gh/huggingface/transformers/pull/5618?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"@sshleifer ",
"This should depend on `config.max_length`, no?\r\n\r\nConfig is here:\r\nhttps://s3.amazonaws.com/models.huggingface.co/bert/t5-small-config.json\r\nand we have the line:\r\nhttps://github.com/huggingface/transformers/blob/353b8f1e7a7361c0afd9e391381bc226b4a5ca8f/examples/seq2seq/finetune.py#L66\r\n\r\nso I think we should continue to let the config determine the generation length.\r\nThe cost of this proposal is that people often set `max_target_length` shorter than optimal to make training run faster for training data, but leave `val_max_target_length` long to get a more accurate approximation of Rouge.",
"Were your summaries getting truncated like #5656 ?",
"That seems like a better solution. I had been using a different prefix, without an associated config, so the max_length must have defaulted to 20."
] | 1,594 | 1,594 | 1,594 | NONE | null | * Modifies the generate() call to allow for generation of sequences up to and including max_target_length number of tokens.
* Prior to this commit, the implementation caps generation at 20 tokens, which may result in poor performance.
* See related recent generation_utils.py commit: https://github.com/huggingface/transformers/blob/c4d4e8bdbd25d9463d41de6398940329c89b7fb6/src/transformers/generation_utils.py#L139 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5618/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5618/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5618",
"html_url": "https://github.com/huggingface/transformers/pull/5618",
"diff_url": "https://github.com/huggingface/transformers/pull/5618.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5618.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/5617 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5617/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5617/comments | https://api.github.com/repos/huggingface/transformers/issues/5617/events | https://github.com/huggingface/transformers/pull/5617 | 653,796,023 | MDExOlB1bGxSZXF1ZXN0NDQ2NjQzNzAy | 5,617 | Update README.md | {
"login": "bashartalafha",
"id": 26685171,
"node_id": "MDQ6VXNlcjI2Njg1MTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/26685171?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bashartalafha",
"html_url": "https://github.com/bashartalafha",
"followers_url": "https://api.github.com/users/bashartalafha/followers",
"following_url": "https://api.github.com/users/bashartalafha/following{/other_user}",
"gists_url": "https://api.github.com/users/bashartalafha/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bashartalafha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bashartalafha/subscriptions",
"organizations_url": "https://api.github.com/users/bashartalafha/orgs",
"repos_url": "https://api.github.com/users/bashartalafha/repos",
"events_url": "https://api.github.com/users/bashartalafha/events{/privacy}",
"received_events_url": "https://api.github.com/users/bashartalafha/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5617?src=pr&el=h1) Report\n> Merging [#5617](https://codecov.io/gh/huggingface/transformers/pull/5617?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/fa5423b1695cd24856bcff47214172e0f540d924&el=desc) will **decrease** coverage by `0.31%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5617?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5617 +/- ##\n==========================================\n- Coverage 77.79% 77.48% -0.32% \n==========================================\n Files 145 145 \n Lines 25355 25355 \n==========================================\n- Hits 19726 19647 -79 \n- Misses 5629 5708 +79 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5617?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5617/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `44.56% <0.00%> (-46.35%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5617/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `63.55% <0.00%> (-31.78%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5617/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `66.66% <0.00%> (-23.43%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5617/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `79.44% <0.00%> (-6.52%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5617/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.92% <0.00%> (-1.97%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5617/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.09% <0.00%> (+1.37%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5617/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5617/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.18% <0.00%> (+74.91%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5617?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5617?src=pr&el=footer). Last update [fa5423b...b25f69f](https://codecov.io/gh/huggingface/transformers/pull/5617?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,594 | 1,594 | 1,594 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5617/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5617/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5617",
"html_url": "https://github.com/huggingface/transformers/pull/5617",
"diff_url": "https://github.com/huggingface/transformers/pull/5617.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5617.patch",
"merged_at": 1594395807000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5616 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5616/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5616/comments | https://api.github.com/repos/huggingface/transformers/issues/5616/events | https://github.com/huggingface/transformers/pull/5616 | 653,743,137 | MDExOlB1bGxSZXF1ZXN0NDQ2NjAxNDM0 | 5,616 | fix 404 | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,594 | 1,594 | 1,594 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5616/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5616/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5616",
"html_url": "https://github.com/huggingface/transformers/pull/5616",
"diff_url": "https://github.com/huggingface/transformers/pull/5616.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5616.patch",
"merged_at": 1594321949000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5615 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5615/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5615/comments | https://api.github.com/repos/huggingface/transformers/issues/5615/events | https://github.com/huggingface/transformers/issues/5615 | 653,700,819 | MDU6SXNzdWU2NTM3MDA4MTk= | 5,615 | 🐛 Bart Tokenization difference between 2.11.0 and 3.0.2 | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This line :\r\n\r\nhttps://github.com/huggingface/transformers/blob/b42586ea560a20dcadb78472a6b4596f579e9043/src/transformers/tokenization_utils.py#L1709\r\n\r\nwas changed to :\r\n\r\nhttps://github.com/huggingface/transformers/blob/b0892fa0e8df02d683e05e625b3903209bff362d/src/transformers/tokenization_utils.py#L505\r\n\r\n---\r\n\r\nIn `2.11.0`, if `add_special_tokens` was `True` (which was the default value), then the RoBERTa tokenizer would add automatically the prefix space.\r\n\r\nIn `3.0.2`, `add_special_tokens` is still `True` by default, but is not passed to `tokenize()` anymore. RoBERTa tokenizer does not add a prefix space, which lead to the difference observed.\r\n\r\n---\r\n\r\nSo the fixed code for `3.0.2` is :\r\n\r\n```python\r\nfrom transformers import BartTokenizer\r\ntokenizer = BartTokenizer.from_pretrained(\"facebook/bart-large\")\r\nprint(tokenizer.batch_encode_plus([\"This is an example\"], add_prefix_space=True))\r\n```\r\n\r\n---\r\n\r\n_Not closing yet as I would like to know if this is an expected breaking changes or not._",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,594 | 1,600 | 1,600 | CONTRIBUTOR | null | # 🐛 Bug
Running this code :
```python
from transformers import BartTokenizer
tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
print(tokenizer.batch_encode_plus(["This is an example"]))
```
in `transformers` `2.11.0` and `3.0.2` gives different results.
`transformers` `2.11.0` :
> {'input_ids': [[0, 152, 16, 41, 1246, 2]], 'attention_mask': [[1, 1, 1, 1, 1, 1]]}
`transformers` `3.0.2` :
> {'input_ids': [[0, 713, 16, 41, 1246, 2]], 'attention_mask': [[1, 1, 1, 1, 1, 1]]}
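Passing `add_prefix_space=True` on `3.0.2` appears to restore the `2.11.0` ids (a quick added sketch, following the fix suggested in the first comment above):
```python
# Sketch: re-add the prefix space explicitly under 3.0.2.
print(tokenizer.batch_encode_plus(["This is an example"], add_prefix_space=True))
# {'input_ids': [[0, 152, 16, 41, 1246, 2]], 'attention_mask': [[1, 1, 1, 1, 1, 1]]}
```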
---
Colab for reproducing :
* [`2.11.0`](https://colab.research.google.com/drive/1qwYkcZoD1JtuLLDjABngJFoDUD06RiXm?usp=sharing)
* [`3.0.2`](https://colab.research.google.com/drive/1qUWcCUYInpa9Lwy2Ur-t3N3hImF1grCT?usp=sharing)
---
Is it from the refactoring of `generation_utils.py` ?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5615/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5615/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5614 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5614/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5614/comments | https://api.github.com/repos/huggingface/transformers/issues/5614/events | https://github.com/huggingface/transformers/pull/5614 | 653,677,115 | MDExOlB1bGxSZXF1ZXN0NDQ2NTUwNzQw | 5,614 | [WIP] Test TF Flaubert + Add {XLM, Flaubert}{TokenClassification, MultipleC… | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5614?src=pr&el=h1) Report\n> Merging [#5614](https://codecov.io/gh/huggingface/transformers/pull/5614?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8a8ae27617e3c4dafb34bcbbaadf4ceee28583bd&el=desc) will **increase** coverage by `0.31%`.\n> The diff coverage is `98.61%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5614?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5614 +/- ##\n==========================================\n+ Coverage 78.49% 78.81% +0.31% \n==========================================\n Files 146 146 \n Lines 26335 26396 +61 \n==========================================\n+ Hits 20671 20803 +132 \n+ Misses 5664 5593 -71 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5614?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/5614/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.24% <ø> (ø)` | |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5614/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `78.48% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/5614/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `88.19% <94.73%> (+62.37%)` | :arrow_up: |\n| [src/transformers/modeling\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/5614/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19mbGF1YmVydC5weQ==) | `86.61% <100.00%> (+1.43%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/5614/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `92.96% <100.00%> (+11.98%)` | :arrow_up: |\n| [src/transformers/modeling\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/5614/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG0ucHk=) | `91.00% <100.00%> (+0.98%)` | :arrow_up: |\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/5614/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5614/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.42% <0.00%> (-29.91%)` | :arrow_down: |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/5614/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `68.14% <0.00%> (-25.67%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5614/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `79.94% <0.00%> (-3.76%)` | :arrow_down: |\n| ... and [7 more](https://codecov.io/gh/huggingface/transformers/pull/5614/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5614?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5614?src=pr&el=footer). Last update [8a8ae27...dd85766](https://codecov.io/gh/huggingface/transformers/pull/5614?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,594 | 1,596 | 1,596 | MEMBER | null | …hoice} models and tests
The remaining TF tests pass with TF2.3. Waiting to unpin TF before merge. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5614/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5614/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5614",
"html_url": "https://github.com/huggingface/transformers/pull/5614",
"diff_url": "https://github.com/huggingface/transformers/pull/5614.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5614.patch",
"merged_at": 1596047186000
} |
https://api.github.com/repos/huggingface/transformers/issues/5613 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5613/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5613/comments | https://api.github.com/repos/huggingface/transformers/issues/5613/events | https://github.com/huggingface/transformers/pull/5613 | 653,661,500 | MDExOlB1bGxSZXF1ZXN0NDQ2NTM4MDg5 | 5,613 | doc fixes | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5613?src=pr&el=h1) Report\n> Merging [#5613](https://codecov.io/gh/huggingface/transformers/pull/5613?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7d0ef0042042899b67867a4e2962d8e97fb5c6f5&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5613?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5613 +/- ##\n=======================================\n Coverage 76.88% 76.88% \n=======================================\n Files 145 145 \n Lines 25355 25355 \n=======================================\n+ Hits 19494 19495 +1 \n+ Misses 5861 5860 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5613?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5613/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.25%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5613?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5613?src=pr&el=footer). Last update [7d0ef00...f052965](https://codecov.io/gh/huggingface/transformers/pull/5613?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,594 | 1,594 | 1,594 | CONTRIBUTOR | null | a few minor doc improvements. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5613/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5613/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5613",
"html_url": "https://github.com/huggingface/transformers/pull/5613",
"diff_url": "https://github.com/huggingface/transformers/pull/5613.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5613.patch",
"merged_at": 1594252364000
} |
https://api.github.com/repos/huggingface/transformers/issues/5612 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5612/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5612/comments | https://api.github.com/repos/huggingface/transformers/issues/5612/events | https://github.com/huggingface/transformers/issues/5612 | 653,610,640 | MDU6SXNzdWU2NTM2MTA2NDA= | 5,612 | Did the run_language_model support TPU? | {
"login": "lai-agent-t",
"id": 64478368,
"node_id": "MDQ6VXNlcjY0NDc4MzY4",
"avatar_url": "https://avatars.githubusercontent.com/u/64478368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lai-agent-t",
"html_url": "https://github.com/lai-agent-t",
"followers_url": "https://api.github.com/users/lai-agent-t/followers",
"following_url": "https://api.github.com/users/lai-agent-t/following{/other_user}",
"gists_url": "https://api.github.com/users/lai-agent-t/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lai-agent-t/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lai-agent-t/subscriptions",
"organizations_url": "https://api.github.com/users/lai-agent-t/orgs",
"repos_url": "https://api.github.com/users/lai-agent-t/repos",
"events_url": "https://api.github.com/users/lai-agent-t/events{/privacy}",
"received_events_url": "https://api.github.com/users/lai-agent-t/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I totally agree with you that the transformers team needs to address this issue from a long time ago. I am also struggle to run token classification using TPU. Google gives TPUv3-8 as a part of google collab for only 9$ which equivalent to 8xV100 GPU. Yet until now, we can't run transformers using TPU. This should be a top priority for the transformers team. at least we need only one running example using token classification NER. I managed to do it using XLA but its nowhere near TPU performance.",
"Hi, thank you for opening this issue.\r\n\r\n@lai-agent-t, did you complete training on the TPU, or did you stop beforehand? If you stopped, was the tokenization process already finished?\r\n\r\n@NLPPower, three NER scripts are available in this repository: NER with Trainer, with TFTrainer, and with Pytorch Lightning. All three support TPU. Did you get bad performance/slow training when using those scripts?",
"I'm NOT stop beforehand, I updated the num_train_epochs latter to 10 and I trained 6 epochs and it takes me almost 2 hours with only 3000 sentences ",
"I see, thanks. In your TPU environment, do you mind running the following (please make sure you have transformers installed from source)?\r\n\r\n```py\r\nfrom transformers.file_utils import is_torch_tpu_available\r\n\r\nprint(is_torch_tpu_available())\r\n```\r\n\r\nThank you!\r\n",
"> Hi, thank you for opening this issue.\r\n> \r\n> @lai-agent-t, did you complete training on the TPU, or did you stop beforehand? If you stopped, was the tokenization process already finished?\r\n> \r\n> @NLPPower, three NER scripts are available in this repository: NER with Trainer, with TFTrainer, and with Pytorch Lightning. All three support TPU. Did you get bad performance/slow training when using those scripts?\r\n\r\nI struggled to run NER classifier using ALBERT model in TPU using TensorFlow . XLA with PyTorch will not give you a great performance compared to pure TF. Plus it doesn't support fp16 which could cut the fine-tuning time by 4x times . I tested fp16 using v100 and i was able to exceed the performance of PyTorch using TPU where i used docker and TF nightly. to confirm my finding please have a look at the Performance Evaluation table at the bottom of this page.\r\nhttps://github.com/allenai/tpu_pretrain\r\nyou can see that TPU in TF is almost 4x-6x faster than Pytorhc + XLA in TPU.\r\nIf you can just create a simple example in google colab where transformer was able to run in TPU in TF for token classification task ( NER ) i will be more than happy, because i struggled to do it since two weeks and there is also couple of folks here who struggled to do it. This should be high priority for transformer team because TPU access can give researcher a powerful resource for almost free using kaggle and google colab.\r\nPlease have a look also at this project which is the closest thing i could find to run NER in TPU using distributed strategy in top of keras.\r\nhttps://github.com/kyzhouhzau/NLPGNN/tree/master/tests/NER/NER_EN",
"I'm sure my tork_tpu is available, because I test the example case you put on the tpu case: \r\npython examples/xla_spawn.py --num_cores 8 \\\r\n\texamples/text-classification/run_glue.py\r\n\t--model_name_or_path bert-base-cased \\\r\n\t--task_name mnli \\\r\n\t--data_dir ./data/glue_data/MNLI \\\r\n\t--output_dir ./models/tpu \\\r\n\t--overwrite_output_dir \\\r\n\t--do_train \\\r\n\t--do_eval \\\r\n\t--num_train_epochs 1 \\\r\n\t--save_steps 20000\r\nit works without any error, but the Utilization of TPU Matrix Units (higher is better) is 5% and it stable\r\n\r\nSo, I'm feel confuse is run_language_model.py support TPU?",
"same here, is there any update?",
"> same here, is there any update?\r\n\r\nI have change to tensorflow 2.0 instead of pytorch ...",
"any updates on pytorch?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,594 | 1,604 | 1,604 | NONE | null | # ❓ Questions & Help
I am trying to use my own dataset on TPU by running run_language_model.py; the command I use is below:
python examples/xla_spawn.py --num_cores 8 \
  examples/language-modeling/run_language_modeling.py \
  --model_name_or_path hfl/chinese-bert-wwm \
  --output_dir model/tpu \
  --train_data_file /Language_masked_model/data/toy_MLM_data.txt \
  --line_by_line \
  --mlm \
  --block_size 512 \
  --do_train \
  --evaluate_during_training \
  --per_device_train_batch_size 10 \
  --tpu_num_cores 8 \
  --debug \
  --num_train_epochs 1 \
  --save_steps 20000
**No errors, but I assume it is not using the TPU.** I monitored the TPU usage and got the info below:
Cloud TPU Monitoring Results (Sample 20 ):
TPU type: TPU v3
Utilization of TPU Matrix Units (higher is better): 0.000%
Cloud TPU Monitoring Results (Sample 21 ):
TPU type: TPU v3
Utilization of TPU Matrix Units (higher is better): 0.000%
Cloud TPU Monitoring Results (Sample 22 ):
TPU type: TPU v3
Number of TPU cores: 1 (Replica count = 8, num cores per replica = 1)
TPU idle time (lower is better): 0.027%
Utilization of TPU Matrix Units (higher is better): 0.039%
Step time: 11.1ms (avg), 11.1ms (min), 11.1ms (max)
Infeed percentage: 0.000% (avg), 0.000% (min), 0.000% (max)
Cloud TPU Monitoring Results (Sample 23 ):
TPU type: TPU v3
Utilization of TPU Matrix Units (higher is better): 0.000%
Cloud TPU Monitoring Results (Sample 24 ):
TPU type: TPU v3
Utilization of TPU Matrix Units (higher is better): 0.000%
**My question is: does run_language_model.py support TPU?**
tpu: V3.8 on Google Cloud Platform
tensorflow==2.2.0
torch==1.7.0a0+12b5bdc
torch-xla==1.6+5430aca
I use the official docker image (gcr.io/tpu-pytorch/xla:nightly_3.6) from the XLA repo.
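A quick availability check (an added sketch of the diagnostic suggested in the comments above; it assumes `transformers` installed from source as of this issue):
```
# Confirm that torch-xla actually sees the TPU from this environment.
from transformers.file_utils import is_torch_tpu_available

print(is_torch_tpu_available())
```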
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5612/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5612/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5611 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5611/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5611/comments | https://api.github.com/repos/huggingface/transformers/issues/5611/events | https://github.com/huggingface/transformers/issues/5611 | 653,536,334 | MDU6SXNzdWU2NTM1MzYzMzQ= | 5,611 | IndexError: index out of range in self | {
"login": "monk1337",
"id": 17107749,
"node_id": "MDQ6VXNlcjE3MTA3NzQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/17107749?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/monk1337",
"html_url": "https://github.com/monk1337",
"followers_url": "https://api.github.com/users/monk1337/followers",
"following_url": "https://api.github.com/users/monk1337/following{/other_user}",
"gists_url": "https://api.github.com/users/monk1337/gists{/gist_id}",
"starred_url": "https://api.github.com/users/monk1337/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/monk1337/subscriptions",
"organizations_url": "https://api.github.com/users/monk1337/orgs",
"repos_url": "https://api.github.com/users/monk1337/repos",
"events_url": "https://api.github.com/users/monk1337/events{/privacy}",
"received_events_url": "https://api.github.com/users/monk1337/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"how did you solve this problem. Can you share your solution.",
"Most likely there is mismatch between vocabulary size of tokenizer and bert model ( in bert config). Try setting vocab size of your tokenizer in bert config while initializing your model.",
"@zhunipingan I had to trim the length of the sentence to 200 After it worked fine.",
"HI @monk1337, the error here is because you've called the model with a sequence that is longer than 512 tokens. BERT-like models have a fixed limit in sequence length, which is often 512 or 1024.\r\n\r\nFor your second question, indeed your model is not on your GPU. With PyTorch, you have to cast your model to the device you want it to run it, so you would have to do something like:\r\n\r\n```py\r\nfrom transformers import BertModel, BertConfig, BertTokenizer\r\nimport torch\r\n \r\ntokenizer = BertTokenizer.from_pretrained('bert-large-uncased')\r\nmodel = BertModel.from_pretrained('bert-large-uncased')\r\ninputs = tokenizer(datar[7], return_tensors=\"pt\")\r\n\r\nmodel.cuda()\r\ninputs = {k: v.cuda() for k, v in inputs.items()}\r\n\r\noutputs = model(**inputs)\r\nfeatures = outputs[0][:,0,:].detach().numpy().squeeze()\r\n```\r\n\r\nPlease note I've also cast the input tensors on GPU, as the model inputs need to be on the same device as the model.\r\n\r\nI recommend looking at the[ CUDA part of the 60 minute blitz tutorial for PyTorch on the PyTorch website ](https://pytorch.org/tutorials/beginner/blitz/tensor_tutorial.html#cuda-tensors)to get an understanding of the CUDA semantics.\r\n\r\nClosing this for now, let me know if you have other issues.",
"Anyone can help?\r\n\r\nI’m not sure this is a bug or not.\r\n\r\nI need to deploy the AWS elastic inference for our service. The Elastic Inference requires using CPU to load and run models.\r\n\r\nbut our code runs well on GPUs, but CPU.\r\n\r\nas the simple code below\r\n\r\n```\r\n###CPUs returns index out of range in self error\r\nimport numpy as np\r\nimport torch\r\nimport torch.nn as nn\r\n\r\nsinusoid_table = torch.FloatTensor(torch.Size([50 + 1, 512]))\r\n\r\npos_emb = nn.Embedding.from_pretrained(sinusoid_table, freeze=True)\r\npositions = torch.arange(200).expand(1, 200).contiguous()+1\r\npositions=positions\r\na= pos_emb(positions)\r\nprint(a)\r\n\r\n###on GPUs this runs well\r\nimport torch\r\nimport torch.nn as nn\r\n\r\ndevice = torch.device(‘cuda:0’)\r\n\r\nsinusoid_table = torch.FloatTensor(torch.Size([50 + 1, 512])).to(device)\r\npos_emb = nn.Embedding.from_pretrained(sinusoid_table, freeze=True).to(device)\r\npositions = torch.arange(200).expand(1, 200).contiguous()+1\r\npositions=positions.to(device)\r\na= pos_emb(positions)\r\nprint(a)\r\n```\r\nI highly appreciate your helps. Thank you.",
"This doesn't seem like a `transformers` issue, but more of a PyTorch issue? You're not using `transformers` in your script.",
"> Most likely there is mismatch between vocabulary size of tokenizer and bert model ( in bert config). Try setting vocab size of your tokenizer in bert config while initializing your model.\r\n\r\nThanks very much.\r\nIt works for me after making vocab_size larger in bert config.",
"Thanks a lot for your help here...I am still having troubles running a similar code. Did you managed to run it in the end? Would you mind sharing how you embedded the vocab_size part?\r\n\r\n```\r\nclassifier = pipeline('sentiment-analysis', model = \"cardiffnlp/twitter-roberta-base-sentiment\")\r\n\r\ndf = (\r\n df\r\n .assign(sentiment = lambda x: x['Content'].apply(lambda s: classifier(s)))\r\n .assign(\r\n label = lambda x: x['sentiment'].apply(lambda s: (s[0]['label'])),\r\n score = lambda x: x['sentiment'].apply(lambda s: (s[0]['score']))\r\n )\r\n)\r\n```",
">Most likely there is mismatch between vocabulary size of tokenizer and bert model ( in bert config). Try setting vocab size of your tokenizer in bert config while initializing your model.\r\n\r\n\r\nDo you know how can I do this? I tried by using: \r\n\r\n configuration = BertConfig(vocab_size=30_522)\r\n BertModel(config=configuration).from_pretrained('bert-base-cased')\r\n\r\nbut it does not work ...\r\nI am a bit confused since it looks to me that my model is not accepting values higher than 29000... How is this possible?\r\n\r\n\r\n",
"> \r\n\r\nHi,\r\n\r\nI met the same problem as you did.\r\n\r\nYou can try `model.config.vocab_size` to find the vacob_size of your model. If your pretrained model is 'bert-base-cased', vacob_size will be 28996. But for 'bert-base-uncased', it's 30522.\r\n\r\nI'm not sure if it will work for you. (I don't think we can reset vocab_size for pretrained model.",
"Thanks, that's It actually. I Also realised It too late... So much time Lost :-D",
"> Most likely there is mismatch between vocabulary size of tokenizer and bert model ( in bert config). Try setting vocab size of your tokenizer in bert config while initializing your model.\r\n\r\nThanks for pointing out so precisely, though I am wondering how you came to know, I mean the process... Did you debugged in the stack trace till its root or you are contributor to transformers or torch libraries, so it came naturally to you?\r\n\r\nI faced this issue while implementing XLM-RoBERTa. Here is how I fixed this:\r\n\r\n xlmr_tokenizer = XLMRobertaTokenizer.from_pretrained('xlm-roberta-large')\r\n config = XLMRobertaConfig() \r\n config.vocab_size = xlmr_tokenizer.vocab_size # setting both to have same vocab size",
"please how do i set the vocab set to exceed 1024",
"> HI @monk1337, the error here is because you've called the model with a sequence that is longer than 512 tokens. BERT-like models have a fixed limit in sequence length, which is often 512 or 1024.\r\n\r\n@LysandreJik Is there anyway we can change the limit? Trying to process a large document. I am using `facebook/bart-large-cnn`\r\n\r\nThanks.",
"> > HI @monk1337, the error here is because you've called the model with a sequence that is longer than 512 tokens. BERT-like models have a fixed limit in sequence length, which is often 512 or 1024.\r\n> \r\n> @LysandreJik Is there anyway we can change the limit? Trying to process a large document. I am using `facebook/bart-large-cnn`\r\n> \r\n> Thanks.\r\n\r\nTry using the Longformer transformer. The pre-trained ones on huggingface can process up to 16k tokens. I used it for my dissertation where I was processing large documents",
"> > > HI @monk1337, the error here is because you've called the model with a sequence that is longer than 512 tokens. BERT-like models have a fixed limit in sequence length, which is often 512 or 1024.\r\n> > \r\n> > \r\n> > @LysandreJik Is there anyway we can change the limit? Trying to process a large document. I am using `facebook/bart-large-cnn`\r\n> > Thanks.\r\n> \r\n> Try using the Longformer transformer. The pre-trained ones on huggingface can process up to 16k tokens. I used it for my dissertation where I was processing large documents\r\n\r\nAh, thanks! Will try it. "
] | 1,594 | 1,665 | 1,596 | NONE | null | # 🐛 Bug
## Information
Model I am using: BERT ('bert-large-uncased'). I am facing two issues related to this model.
Language I am using the model on: English.
The problem arises when using:
When I try to encode a large sentence (about 500 words long), I get this error:
`IndexError: index out of range in self`
I tried setting the maximum length to 400 words, but I still get the same error.
The data I am using can be downloaded like this:
```
from sklearn.datasets import fetch_20newsgroups
import re
categories = ['alt.atheism', 'soc.religion.christian','comp.graphics', 'sci.med']
twenty_train = fetch_20newsgroups(subset='train',categories=categories, shuffle=True, random_state=42)
print("\n".join(twenty_train.data[0].split("\n")[:3]))
X_tratado = []
for email in range(0, len(twenty_train.data)):
    # Remove special characters
    texto = re.sub(r'\\r\\n', ' ', str(twenty_train.data[email]))
    texto = re.sub(r'\W', ' ', texto)
    # Remove single-letter tokens
    texto = re.sub(r'\s+[a-zA-Z]\s+', ' ', texto)
    texto = re.sub(r'\^[a-zA-Z]\s+', ' ', texto)
    # Replace multiple spaces with a single space
    texto = re.sub(r'\s+', ' ', texto, flags=re.I)
    # Remove the 'b' that appears at the beginning
    texto = re.sub(r'^b\s+', '', texto)
    # Convert to lowercase
    texto = texto.lower()
    X_tratado.append(texto)
dr = {}
dr['text'] = X_tratado
dr['labels'] = twenty_train.target
```
Now I am using the BERT model to encode the sentences:
```
from transformers import BertModel, BertConfig, BertTokenizer
import torch
tokenizer = BertTokenizer.from_pretrained('bert-large-uncased')
model = BertModel.from_pretrained('bert-large-uncased')
inputs = tokenizer(datar[7], return_tensors="pt")
outputs = model(**inputs)
features = outputs[0][:,0,:].detach().numpy().squeeze()
```
This gives the following error:
```
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-41-5dcf440b245f> in <module>
5 model = BertModel.from_pretrained('bert-large-uncased')
6 inputs = tokenizer(datar[7], return_tensors="pt")
----> 7 outputs = model(**inputs)
8 features = outputs[0][:,0,:].detach().numpy().squeeze()
~/tfproject/tfenv/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
548 result = self._slow_forward(*input, **kwargs)
549 else:
--> 550 result = self.forward(*input, **kwargs)
551 for hook in self._forward_hooks.values():
552 hook_result = hook(self, input, result)
~/tfproject/tfenv/lib/python3.7/site-packages/transformers/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, output_attentions, output_hidden_states)
751
752 embedding_output = self.embeddings(
--> 753 input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds
754 )
755 encoder_outputs = self.encoder(
~/tfproject/tfenv/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
548 result = self._slow_forward(*input, **kwargs)
549 else:
--> 550 result = self.forward(*input, **kwargs)
551 for hook in self._forward_hooks.values():
552 hook_result = hook(self, input, result)
~/tfproject/tfenv/lib/python3.7/site-packages/transformers/modeling_bert.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds)
177 if inputs_embeds is None:
178 inputs_embeds = self.word_embeddings(input_ids)
--> 179 position_embeddings = self.position_embeddings(position_ids)
180 token_type_embeddings = self.token_type_embeddings(token_type_ids)
181
~/tfproject/tfenv/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
548 result = self._slow_forward(*input, **kwargs)
549 else:
--> 550 result = self.forward(*input, **kwargs)
551 for hook in self._forward_hooks.values():
552 hook_result = hook(self, input, result)
~/tfproject/tfenv/lib/python3.7/site-packages/torch/nn/modules/sparse.py in forward(self, input)
112 return F.embedding(
113 input, self.weight, self.padding_idx, self.max_norm,
--> 114 self.norm_type, self.scale_grad_by_freq, self.sparse)
115
116 def extra_repr(self):
~/tfproject/tfenv/lib/python3.7/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
1722 # remove once script supports set_grad_enabled
1723 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1724 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
1725
1726
IndexError: index out of range in self
```
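The traceback points at the position embeddings, which is consistent with the input being longer than BERT's 512-token limit. Note the limit is in tokens, not words, so a 400-word text can still exceed it after wordpiece splitting. For reference, a minimal sketch of the usual workaround, truncating at tokenization time:
```
# Sketch: truncate the encoded sequence to BERT's 512-token maximum.
inputs = tokenizer(datar[7], return_tensors="pt", truncation=True, max_length=512)
outputs = model(**inputs)
features = outputs[0][:,0,:].detach().numpy().squeeze()
```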
The second issue I am facing: when I use this BERT model to encode many sentences, it seems BERT is not using the GPU:

How can I make the BERT model use the GPU to accelerate encoding?
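As a note on the second issue: the model and inputs are not moved to the GPU automatically; both have to be placed there explicitly. A minimal sketch, assuming a CUDA device is available:
```
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = BertModel.from_pretrained('bert-large-uncased').to(device)  # move the weights to the GPU
inputs = tokenizer(datar[7], return_tensors="pt", truncation=True, max_length=512)
inputs = {k: v.to(device) for k, v in inputs.items()}  # move the input tensors too
with torch.no_grad():  # inference only, no gradient buffers
    outputs = model(**inputs)
```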
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: '3.0.0'
- Platform: Ubuntu 18.04.4 LTS
- Python version: python3.7
- PyTorch version (GPU?):
- Tensorflow version (GPU?): '2.2.0'
- Using GPU in script?:
- Using distributed or parallel set-up in script?: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5611/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5611/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5610 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5610/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5610/comments | https://api.github.com/repos/huggingface/transformers/issues/5610/events | https://github.com/huggingface/transformers/pull/5610 | 653,482,406 | MDExOlB1bGxSZXF1ZXN0NDQ2MzkyNjE3 | 5,610 | create model cards for qg models | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5610?src=pr&el=h1) Report\n> Merging [#5610](https://codecov.io/gh/huggingface/transformers/pull/5610?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/40d98ebf50c4662bcd6dce6395bbed0b2142ea52&el=desc) will **increase** coverage by `1.23%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5610?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5610 +/- ##\n==========================================\n+ Coverage 76.88% 78.11% +1.23% \n==========================================\n Files 145 145 \n Lines 25351 25351 \n==========================================\n+ Hits 19491 19804 +313 \n+ Misses 5860 5547 -313 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5610?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5610/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-49.38%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5610/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.26% <0.00%> (-0.34%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5610/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5610?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5610?src=pr&el=footer). Last update [40d98eb...d2b586a](https://codecov.io/gh/huggingface/transformers/pull/5610?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Looks great!",
"This is really excellent work @patil-suraj and thanks for the thorough documentation."
] | 1,594 | 1,594 | 1,594 | MEMBER | null | cc @julien-c , @danielduckworth | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5610/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5610/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5610",
"html_url": "https://github.com/huggingface/transformers/pull/5610",
"diff_url": "https://github.com/huggingface/transformers/pull/5610.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5610.patch",
"merged_at": 1594238937000
} |
https://api.github.com/repos/huggingface/transformers/issues/5609 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5609/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5609/comments | https://api.github.com/repos/huggingface/transformers/issues/5609/events | https://github.com/huggingface/transformers/issues/5609 | 653,429,077 | MDU6SXNzdWU2NTM0MjkwNzc= | 5,609 | Duplicate grouped entities when using 'ner' pipeline | {
"login": "JamesDeAntonis",
"id": 33379057,
"node_id": "MDQ6VXNlcjMzMzc5MDU3",
"avatar_url": "https://avatars.githubusercontent.com/u/33379057?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JamesDeAntonis",
"html_url": "https://github.com/JamesDeAntonis",
"followers_url": "https://api.github.com/users/JamesDeAntonis/followers",
"following_url": "https://api.github.com/users/JamesDeAntonis/following{/other_user}",
"gists_url": "https://api.github.com/users/JamesDeAntonis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JamesDeAntonis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JamesDeAntonis/subscriptions",
"organizations_url": "https://api.github.com/users/JamesDeAntonis/orgs",
"repos_url": "https://api.github.com/users/JamesDeAntonis/repos",
"events_url": "https://api.github.com/users/JamesDeAntonis/events{/privacy}",
"received_events_url": "https://api.github.com/users/JamesDeAntonis/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Can you check whether this still occurs after recently merged #4987? ",
"Thanks for the response.\r\n\r\nIs there a special repo I have to pull from or can I just update transformers. Assuming the latter, I just re-ran `pip install --upgrade transformers`. After doing this, the bug persists.",
"No, you would have to install from source as explained in the readme.",
"Just cloned the repo (as directed in readme) and noticed that the issue was resolved! Any estimation when the next update will be released?",
"I was still having problems similar to issues #5077 #4816 #5377 \r\n\r\nAfter some debugging these are the possible reasons & fixes for wrong groupings:\r\n\r\nLooking for feedback from maintainers on my [WIP] PR https://github.com/huggingface/transformers/pull/5970\r\n\r\n- [ ] [Bug Fix] add an option `ignore_subwords` to ignore subsequent ##wordpieces in predictions. Because some models train on only the first token of a word and not on the subsequent wordpieces (BERT NER default). So it makes sense doing the same thing at inference time.\r\n\r\n - The simplest fix is to just group the subwords with the first wordpiece. \r\n - [TODO] how to handle ignored scores? just set them to 0 and calculate zero invariant mean ?\r\n - [TODO] handle different wordpiece_prefix ## ? possible approaches:\r\nget it from tokenizer? but currently most tokenizers dont have a wordpiece_prefix property?\r\nhave an _is_subword(token)\r\n\r\n- [ ] [Bug Fix] Shouldn't group entities that are both 'B' even if they are same type \r\n\r\n - (B-type1 B-type1) != (B-type1 I-type1)\r\n\r\n- [ ] [Feature add] added option to `skip_special_tokens`. Cause It was harder to remove them after grouping.\r\n\r\n- [ ] [Additional Changes] remove B/I prefix on returned grouped_entities \r\n\r\n- [ ] [Feature Request/TODO] Return indexes?\r\n\r\n- [ ] [Bug TODO] can't use fast tokenizer with grouped_entities ('BertTokenizerFast' object has no attribute 'convert_tokens_to_string')\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,594 | 1,601 | 1,601 | CONTRIBUTOR | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): 'ner' pipeline
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Have transformers 3.0.2 installed
2. Run the below code
```python
from transformers import pipeline
nlp = pipeline('ner', grouped_entities=True)
nlp('Welcome to New York')
```
## Expected behavior
We should receive `[{'entity_group': 'I-LOC', 'score': 0.9984402656555176, 'word': 'New York'}]`, but instead the output has duplicated 'New York': `[{'entity_group': 'I-LOC', 'score': 0.9984402656555176, 'word': 'New York'}, {'entity_group': 'I-LOC', 'score': 0.9984402656555176, 'word': 'New York'}]`.
### The Cause of the Issue According to Me
After reading the 3.0.2 code, I noticed that lines 1047-1049 were added. I think this was done to fix a prior issue that occasionally caused the last named entity in the sequence to be omitted when `grouped_entities=True`. Long story short, I think this snippet was a patch that only shifted the problem from an occasional named-entity omission to an occasional named-entity duplicate.
The for-loop that precedes this snippet is inconsistent in that sometimes the last named entity gets successfully added anyway (e.g. if the `if` clause on 1025 (first iteration) or 1032 is entered on the last iteration). In this case, there is a duplicate entry once the new code at 1047 runs. Conversely, the last named entity won't be added if the `else` clause on line 1041 is entered on the last iteration. In this case, the final named entity correctly gets added after the new code snippet runs.
In short, there is a duplicate (I think) if (i) there is only one recognized named entity or (ii) the tokenizer split the last named entity into multiple tokens. Otherwise, there is no duplicate.
nlp('Welcome to Dallas') -> duplicate 'Dallas' because 'Dallas' is the only named entity
nlp('HuggingFace is not located in Dallas') -> no duplicate because there are multiple entities and the final one 'Dallas' is not tokenized into multiple tokens
nlp('HuggingFace is located in New York City') -> duplicate 'New York City' because the final named entity 'New York City' is tokenized into multiple tokens
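Until a fix lands upstream, a client-side de-duplication works as a stopgap. A minimal sketch, assuming the duplicate is always an exact repeat of an adjacent group (as in the cases above):
```python
results = nlp('Welcome to New York')

deduped = []
for entity in results:
    # drop exact repeats of the previous grouped entity
    if not deduped or entity != deduped[-1]:
        deduped.append(entity)

print(deduped)  # one 'New York' group instead of two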
## Environment info
- `transformers` version: 3.0.2
- Platform: Linux-5.3.0-1031-azure-x86_64-with-glibc2.10
- Python version: 3.8.1
- PyTorch version (GPU?): 1.5.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5609/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5609/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5608 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5608/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5608/comments | https://api.github.com/repos/huggingface/transformers/issues/5608/events | https://github.com/huggingface/transformers/issues/5608 | 653,428,433 | MDU6SXNzdWU2NTM0Mjg0MzM= | 5,608 | Is there an implementation of BERT architecture in PyTorch that I can modify here? | {
"login": "abhisheknovoic",
"id": 62595485,
"node_id": "MDQ6VXNlcjYyNTk1NDg1",
"avatar_url": "https://avatars.githubusercontent.com/u/62595485?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhisheknovoic",
"html_url": "https://github.com/abhisheknovoic",
"followers_url": "https://api.github.com/users/abhisheknovoic/followers",
"following_url": "https://api.github.com/users/abhisheknovoic/following{/other_user}",
"gists_url": "https://api.github.com/users/abhisheknovoic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhisheknovoic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhisheknovoic/subscriptions",
"organizations_url": "https://api.github.com/users/abhisheknovoic/orgs",
"repos_url": "https://api.github.com/users/abhisheknovoic/repos",
"events_url": "https://api.github.com/users/abhisheknovoic/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhisheknovoic/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! Yes, you can modify the BERT architecture as you please, it's self contained. It's in the [modeling_bert.py](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py) file.",
"Thanks @LysandreJik , can you also confirm if this implementation supports multi gpu training? ",
"This implementation is a PyTorch model, so it supports everything a PyTorch model can do :) GPU, Multi-GPU, TPU, you name it.",
"Hello @LysandreJik and team, I am looking at the script `run_language_modeling.py` at https://github.com/huggingface/transformers/tree/master/examples/language-modeling . \r\n\r\nI saw that the example uses WikiText-2 dataset for example. If I want to fine-tune BERT on my own dataset, how should the dataset be structured? Should I mask the words myself or is there some DataLoader that will do that?\r\n\r\nI downloaded the WikiText data and I saw an example chunk of text is \r\n```\r\n = Robert <unk> =\r\n\r\n Robert <unk> is an English film , television and theatre actor . He had a guest @-@ starring role on the television series The Bill in 2000 . This was followed by a starring role in the play Herons written by Simon Stephens , which was performed in 2001 at the Royal Court Theatre . He had a guest role in the television series Judge John <unk> in 2002 . In 2004 <unk> landed a role as \" Craig \" in the episode \" Teddy 's Story \" of the television series The Long Firm ; he starred alongside actors Mark Strong and Derek Jacobi . He was cast in the 2005 theatre productions of the Philip Ridley play Mercury Fur , which was performed at the Drum Theatre in Plymouth and the <unk> <unk> Factory in London . He was directed by John <unk> and starred alongside Ben <unk> , Shane <unk> , Harry Kent , Fraser <unk> , Sophie Stanton and Dominic Hall .\r\n In 2006 , <unk> starred alongside <unk> in the play <unk> written by Mark <unk> . He appeared on a 2006 episode of the television series , Doctors , followed by a role in the 2007 theatre production of How to Curse directed by <unk> <unk> . How to Curse was performed at Bush Theatre in the London Borough of <unk> and Fulham . <unk> starred in two films in 2008 , <unk> <unk> by filmmaker Paris <unk> , and <unk> Punch directed by <unk> Blackburn . In May 2008 , <unk> made a guest appearance on a two @-@ part episode arc of the television series Waking the Dead , followed by an appearance on the television series <unk> in November 2008 . He had a recurring role in ten episodes of the television series <unk> in 2010 , as \" <unk> Fletcher \" . <unk> starred in the 2011 film <unk> directed by Paris <unk> .\r\n\r\n = = Career = =\r\n```\r\n\r\nIn my case I have a large set of text files. Just text files with free text inside it. Can someone point me to a document/resource that lets me understand how should the input be for masked language modelling pretraining using BERT?\r\n\r\nI plan to use the `https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py` file contents, modify the layers a bit based on my architecture decisions and train it on my own dataset using masked language modeling where random words are masked and I predict them back.\r\n\r\nAny help is appreciated. Thanks",
"Hi @abhisheksgumadi, this is a very interesting and rather broad question. Could you ask it on the forums over on https://discuss.huggingface.co? Thanks a lot!"
] | 1,594 | 1,594 | 1,594 | NONE | null | Hello Team,
Firstly, thanks for this amazing repo.
I am doing my own research and I want access to a native implementation of BERT in PyTorch so that I can modify the architecture and play with it by including a few of my own modules.
Is that possible with the codebase in the HuggingFace repo here?
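A rough sketch of the kind of modification I mean (the extra module here is purely illustrative, and newly added weights would be randomly initialized):
```python
import torch.nn as nn
from transformers import BertModel

class MyBert(BertModel):
    def __init__(self, config):
        super().__init__(config)
        # illustrative custom layer; swap in whatever module you are experimenting with
        self.extra_projection = nn.Linear(config.hidden_size, config.hidden_size)

model = MyBert.from_pretrained('bert-base-uncased')
```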
Thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5608/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5608/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5607 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5607/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5607/comments | https://api.github.com/repos/huggingface/transformers/issues/5607/events | https://github.com/huggingface/transformers/pull/5607 | 653,419,435 | MDExOlB1bGxSZXF1ZXN0NDQ2MzQyNTU5 | 5,607 | docs(wandb): explain how to use W&B integration | {
"login": "borisdayma",
"id": 715491,
"node_id": "MDQ6VXNlcjcxNTQ5MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/715491?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/borisdayma",
"html_url": "https://github.com/borisdayma",
"followers_url": "https://api.github.com/users/borisdayma/followers",
"following_url": "https://api.github.com/users/borisdayma/following{/other_user}",
"gists_url": "https://api.github.com/users/borisdayma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/borisdayma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/borisdayma/subscriptions",
"organizations_url": "https://api.github.com/users/borisdayma/orgs",
"repos_url": "https://api.github.com/users/borisdayma/repos",
"events_url": "https://api.github.com/users/borisdayma/events{/privacy}",
"received_events_url": "https://api.github.com/users/borisdayma/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5607?src=pr&el=h1) Report\n> Merging [#5607](https://codecov.io/gh/huggingface/transformers/pull/5607?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/40d98ebf50c4662bcd6dce6395bbed0b2142ea52&el=desc) will **increase** coverage by `1.23%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5607?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5607 +/- ##\n==========================================\n+ Coverage 76.88% 78.11% +1.23% \n==========================================\n Files 145 145 \n Lines 25351 25351 \n==========================================\n+ Hits 19491 19804 +313 \n+ Misses 5860 5547 -313 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5607?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5607/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-49.38%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5607/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.26% <0.00%> (-0.34%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5607/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5607?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5607?src=pr&el=footer). Last update [40d98eb...f38d33b](https://codecov.io/gh/huggingface/transformers/pull/5607?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,594 | 1,594 | 1,594 | CONTRIBUTOR | null | Documentation on how to use W&B integration has been added to clear up confusion on how to customize logging.
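As a quick illustration of the kind of customization the docs cover (these environment variables are read by the Trainer's W&B hook; the values are placeholders):
```python
import os

os.environ["WANDB_PROJECT"] = "my-project"  # send runs to a specific W&B project
os.environ["WANDB_WATCH"] = "all"           # also log gradients and parameters
```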
fix #5262 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5607/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5607/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5607",
"html_url": "https://github.com/huggingface/transformers/pull/5607",
"diff_url": "https://github.com/huggingface/transformers/pull/5607.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5607.patch",
"merged_at": 1594717954000
} |
https://api.github.com/repos/huggingface/transformers/issues/5606 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5606/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5606/comments | https://api.github.com/repos/huggingface/transformers/issues/5606/events | https://github.com/huggingface/transformers/issues/5606 | 653,366,047 | MDU6SXNzdWU2NTMzNjYwNDc= | 5,606 | OSError using FlauBERT | {
"login": "anislll",
"id": 61878147,
"node_id": "MDQ6VXNlcjYxODc4MTQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/61878147?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anislll",
"html_url": "https://github.com/anislll",
"followers_url": "https://api.github.com/users/anislll/followers",
"following_url": "https://api.github.com/users/anislll/following{/other_user}",
"gists_url": "https://api.github.com/users/anislll/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anislll/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anislll/subscriptions",
"organizations_url": "https://api.github.com/users/anislll/orgs",
"repos_url": "https://api.github.com/users/anislll/repos",
"events_url": "https://api.github.com/users/anislll/events{/privacy}",
"received_events_url": "https://api.github.com/users/anislll/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,594 | 1,600 | 1,600 | NONE | null | Hello everyone,
I'm trying to run the FlauBERT model on my data using ktrain.
# 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): flaubert/flaubert_base_cased
Language I am using the model on (English, Chinese ...): French
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
I'm using ktrain to load my model.
After preprocessing my data, when I want to get the classifier with the get_classifier() function, I get this error:
```
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
~\Anaconda3\lib\site-packages\transformers\modeling_tf_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
461 if resolved_archive_file is None:
--> 462 raise EnvironmentError
463 except EnvironmentError:
OSError:
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
~\Anaconda3\lib\site-packages\ktrain\text\preprocessor.py in _load_pretrained(self, mname, num_labels)
958 try:
--> 959 model = self.model_type.from_pretrained(mname, config=self.config)
960 except:
~\Anaconda3\lib\site-packages\transformers\modeling_tf_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
1046 if isinstance(config, config_class):
-> 1047 return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
1048 raise ValueError(
~\Anaconda3\lib\site-packages\transformers\modeling_tf_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
468 )
--> 469 raise EnvironmentError(msg)
470 if resolved_archive_file == archive_file:
OSError: Can't load weights for 'flaubert/flaubert_base_cased'. Make sure that:
- 'flaubert/flaubert_base_cased' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'flaubert/flaubert_base_cased' is the correct path to a directory containing a file named one of tf_model.h5, pytorch_model.bin.
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
~\Anaconda3\lib\site-packages\ktrain\text\preprocessor.py in _load_pretrained(self, mname, num_labels)
961 try:
--> 962 model = self.model_type.from_pretrained(mname, config=self.config, from_pt=True)
963 except:
~\Anaconda3\lib\site-packages\transformers\modeling_tf_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
1046 if isinstance(config, config_class):
-> 1047 return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
1048 raise ValueError(
~\Anaconda3\lib\site-packages\transformers\modeling_tf_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
481 # Load from a PyTorch checkpoint
--> 482 return load_pytorch_checkpoint_in_tf2_model(model, resolved_archive_file, allow_missing_keys=True)
483
~\Anaconda3\lib\site-packages\transformers\modeling_tf_pytorch_utils.py in load_pytorch_checkpoint_in_tf2_model(tf_model, pytorch_checkpoint_path, tf_inputs, allow_missing_keys)
92 return load_pytorch_weights_in_tf2_model(
---> 93 tf_model, pt_state_dict, tf_inputs=tf_inputs, allow_missing_keys=allow_missing_keys
94 )
~\Anaconda3\lib\site-packages\transformers\modeling_tf_pytorch_utils.py in load_pytorch_weights_in_tf2_model(tf_model, pt_state_dict, tf_inputs, allow_missing_keys)
124 if tf_inputs is not None:
--> 125 tf_model(tf_inputs, training=False) # Make sure model is built
126
~\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\base_layer.py in __call__(self, inputs, *args, **kwargs)
821 self._compute_dtype):
--> 822 outputs = self.call(cast_inputs, *args, **kwargs)
823 self._handle_activity_regularization(inputs, outputs)
~\Anaconda3\lib\site-packages\transformers\modeling_tf_xlm.py in call(self, inputs, attention_mask, langs, token_type_ids, position_ids, lengths, cache, head_mask, inputs_embeds, output_attentions, output_hidden_states, labels, training)
803 output_hidden_states=output_hidden_states,
--> 804 training=training,
805 )
~\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\base_layer.py in __call__(self, inputs, *args, **kwargs)
821 self._compute_dtype):
--> 822 outputs = self.call(cast_inputs, *args, **kwargs)
823 self._handle_activity_regularization(inputs, outputs)
~\Anaconda3\lib\site-packages\transformers\modeling_tf_flaubert.py in call(self, inputs, attention_mask, langs, token_type_ids, position_ids, lengths, cache, head_mask, inputs_embeds, training, output_attentions, output_hidden_states)
259 if not self.pre_norm:
--> 260 attn_outputs = self.attentions[i]([tensor, attn_mask, None, cache, head_mask[i]], training=training)
261 attn = attn_outputs[0]
~\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\base_layer.py in __call__(self, inputs, *args, **kwargs)
821 self._compute_dtype):
--> 822 outputs = self.call(cast_inputs, *args, **kwargs)
823 self._handle_activity_regularization(inputs, outputs)
~\Anaconda3\lib\site-packages\transformers\modeling_tf_xlm.py in call(self, inputs, training)
140 """
--> 141 input, mask, kv, cache, head_mask, output_attentions = inputs
142 # Input is (bs, qlen, dim)
ValueError: not enough values to unpack (expected 6, got 5)
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
<ipython-input-38-dc13d8280fd1> in <module>
----> 1 model = t.get_classifier()
~\Anaconda3\lib\site-packages\ktrain\text\preprocessor.py in get_classifier(self, fpath, multilabel, metrics)
997 num_labels = len(self.get_classes())
998 mname = fpath if fpath is not None else self.model_name
--> 999 model = self._load_pretrained(mname, num_labels)
1000 if multilabel:
1001 loss_fn = keras.losses.BinaryCrossentropy(from_logits=True)
~\Anaconda3\lib\site-packages\ktrain\text\preprocessor.py in _load_pretrained(self, mname, num_labels)
962 model = self.model_type.from_pretrained(mname, config=self.config, from_pt=True)
963 except:
--> 964 raise ValueError('could not load pretrained model %s using both from_pt=False and from_pt=True' % (mname))
965 else:
966 model = self.model_type.from_pretrained(mname, num_labels=num_labels)
ValueError: could not load pretrained model flaubert/flaubert_base_cased using both from_pt=False and from_pt=True
```
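For context, the call sequence that triggers this is roughly the standard ktrain flow below (class names and the training variables are placeholders from my setup); the underlying failure is the input-unpacking mismatch in `modeling_tf_xlm.py` visible at the bottom of the trace:
```
import ktrain
from ktrain import text

t = text.Transformer('flaubert/flaubert_base_cased', maxlen=128,
                     class_names=['neg', 'pos'])  # placeholder labels
trn = t.preprocess_train(x_train, y_train)        # x_train / y_train: my French texts and labels
model = t.get_classifier()                        # fails here with the error above
```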
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.0.2
- Platform: windows 10
- Python version: 3.8
- PyTorch version (No GPU): 1.0.0
- Tensorflow version (No GPU): 2.1.0
- Using GPU in script?: No
- Using distributed or parallel set-up in script?:
Thank you very much for your help.
"url": "https://api.github.com/repos/huggingface/transformers/issues/5606/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5606/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5605 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5605/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5605/comments | https://api.github.com/repos/huggingface/transformers/issues/5605/events | https://github.com/huggingface/transformers/issues/5605 | 653,362,662 | MDU6SXNzdWU2NTMzNjI2NjI= | 5,605 | Here maybe a bug, when we load staged checkpoint | {
"login": "hengchao0248",
"id": 17661135,
"node_id": "MDQ6VXNlcjE3NjYxMTM1",
"avatar_url": "https://avatars.githubusercontent.com/u/17661135?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hengchao0248",
"html_url": "https://github.com/hengchao0248",
"followers_url": "https://api.github.com/users/hengchao0248/followers",
"following_url": "https://api.github.com/users/hengchao0248/following{/other_user}",
"gists_url": "https://api.github.com/users/hengchao0248/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hengchao0248/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hengchao0248/subscriptions",
"organizations_url": "https://api.github.com/users/hengchao0248/orgs",
"repos_url": "https://api.github.com/users/hengchao0248/repos",
"events_url": "https://api.github.com/users/hengchao0248/events{/privacy}",
"received_events_url": "https://api.github.com/users/hengchao0248/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I'm also puzzled by this. The calculations here seems incorrect.",
"To me these calculations are not incorrect if we take `step` as optimization steps, however `steps_trained_in_current_epoch` is wrongly used to skip training batches without considering gradient accumulation. \r\n\r\n+1 for the proposed calculation for `steps_trained_in_current_epoch` as the number of batches to be skipped.",
"@sgugger might be interested in this.",
"There is indeed a problem, but only with `steps_trained_in_current_epoch`. The `global_step` variable represents the number of optimization steps, not the number of batches seen. The variable `num_update_steps_per_epoch` take this into account so `epochs_trained` is correct. `steps_trained_in_current_epoch` represents the number of update steps to skip but is used as the number of batches to skip, so either need to multiply it by the `gradient_accumulation_steps` (and rename it for clarity) or skip `gradient_accumulation_steps` batches before subtracting 1 to it later in the loop.\r\n\r\nThis also shows that we direly miss a test to check resuming training works with gradient accumulation. I can look into this when I have a bit of time, but will be fairly busy with the preparation for v4."
] | 1,594 | 1,605 | 1,600 | NONE | null | ERROR: type should be string, got "https://github.com/huggingface/transformers/blob/40d98ebf50c4662bcd6dce6395bbed0b2142ea52/src/transformers/trainer.py#L458\r\n\r\nI met this bug when I used the setting below:\r\n\r\nglobal_steps = 2748\r\nlen(train_dataloader) = 27484\r\ngradient_accumulation_steps = 4\r\n\r\nIn the original code, \"steps_trained_in_current_epoch\" will be 2748 ! BUT this variable should be 2748*4 = 10,992\r\n\r\nthe code I suggested is below:\r\n\r\n```\r\nepochs_trained = (self.global_step * self.args.gradient_accumulation_steps) // len(train_dataloader)\r\nsteps_trained_in_current_epoch = (self.global_step * self.args.gradient_accumulation_steps) % len(train_dataloader)\r\n```" | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5605/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5605/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5604 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5604/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5604/comments | https://api.github.com/repos/huggingface/transformers/issues/5604/events | https://github.com/huggingface/transformers/issues/5604 | 653,331,917 | MDU6SXNzdWU2NTMzMzE5MTc= | 5,604 | [Benchmark] TFGPT2LMHeadModel is five times slower than GPT2LMHeadModel | {
"login": "bjourne",
"id": 142475,
"node_id": "MDQ6VXNlcjE0MjQ3NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/142475?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bjourne",
"html_url": "https://github.com/bjourne",
"followers_url": "https://api.github.com/users/bjourne/followers",
"following_url": "https://api.github.com/users/bjourne/following{/other_user}",
"gists_url": "https://api.github.com/users/bjourne/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bjourne/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bjourne/subscriptions",
"organizations_url": "https://api.github.com/users/bjourne/orgs",
"repos_url": "https://api.github.com/users/bjourne/repos",
"events_url": "https://api.github.com/users/bjourne/events{/privacy}",
"received_events_url": "https://api.github.com/users/bjourne/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"That's probably because you're in eager mode in your TensorFlow script. You can read about eager mode [here](https://www.tensorflow.org/guide/eager).\r\n\r\n[Here's](https://docs.google.com/spreadsheets/d/1sryqufw2D0XlUH4sq3e9Wnxu5EAQkaohzrJbd5HdQ_w/edit#gid=0) a spreadsheet showcasing several model performances, you can check it out for GPT-2.",
"The script runs even slower in graph execution mode.",
"Same thing here:\r\n```python\r\nfrom time import time\r\nfrom transformers import (TFGPT2LMHeadModel, GPT2Tokenizer,\r\n GPT2LMHeadModel,\r\n pipeline)\r\n\r\nseed = \"What are you doing after you have finished working?\"\r\nmodel = GPT2LMHeadModel.from_pretrained('gpt2')\r\ntokenizer = GPT2Tokenizer.from_pretrained(\"gpt2\")\r\ngen = pipeline('text-generation',\r\n model = model,\r\n tokenizer = tokenizer)\r\nstart = time()\r\nout = gen(seed, max_length = 100, num_return_sequences = 1)\r\nprint(time() - start, out)\r\n```\r\nJust changing `GPT2LMHeadModel` to `TFGPT2LMHeadModel` makes the program run 5 times slower.",
"Oh, I see. Thanks for opening an issue, we're looking into it now.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"same question",
"with @gante's new TF generate method this should be much faster now no? :-)",
"Hi @shiyongde 👋 Yeah, we have just released a much faster TF generation. Check our blog post [here](https://huggingface.co/blog/tf-xla-generate).\r\n\r\nNote that it is not yet compatible with `pipeline` (it is in our TODO list)"
] | 1,594 | 1,661 | 1,602 | NONE | null | Here are two scripts I ran.
```python
from time import time
from transformers import TFGPT2LMHeadModel, GPT2Tokenizer
import tensorflow as tf
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = TFGPT2LMHeadModel.from_pretrained('gpt2')
text = "What are you doing after you have finished working?"
generated = tokenizer.encode(text)
context = tf.constant([generated])
past = None
start = time()
for i in range(100):
    output, past = model(context, past=past)
    logits = output[0, -1, :]
    tok = tf.argmax(logits)
    generated.append(tok.numpy())
    context = tf.expand_dims(tf.expand_dims(tok, 0), 0)
sequence = tokenizer.decode(generated)
print(time() - start, sequence)
```
and
```python
from time import time
from transformers import GPT2LMHeadModel, GPT2Tokenizer
import torch
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained('gpt2')
text = "What are you doing after you have finished working?"
generated = tokenizer.encode(text)
context = torch.tensor([generated])
past = None
start = time()
for i in range(100):
    output, past = model(context, past=past)
    token = torch.argmax(output[..., -1, :])
    generated += [token.tolist()]
    context = token.unsqueeze(0)
sequence = tokenizer.decode(generated)
print(time() - start, sequence)
```
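One common way to reduce TensorFlow's eager-mode overhead in loops like these is to compile the per-token step with `tf.function`; how much of the gap it closes here is untested, so treat the following as a sketch:
```python
import tensorflow as tf

# Relaxed shapes reduce retracing as the cached `past` grows each step.
@tf.function(experimental_relax_shapes=True)
def step(context, past):
    return model(context, past=past)
```
Inside the loop, `step(context, past)` would replace the direct `model(context, past=past)` call; the first call with `past=None` still traces separately.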
On my computer, with the models running on the CPU, the PyTorch version finishes in about six seconds while the TensorFlow version takes 30 seconds. So something must be wrong with the TF implementation, because it shouldn't be that much slower. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5604/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5604/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5603 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5603/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5603/comments | https://api.github.com/repos/huggingface/transformers/issues/5603/events | https://github.com/huggingface/transformers/pull/5603 | 653,321,531 | MDExOlB1bGxSZXF1ZXN0NDQ2MjYyNzE4 | 5,603 | Update benchmark notebook | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,594 | 1,594 | 1,594 | MEMBER | null | Small update | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5603/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5603/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5603",
"html_url": "https://github.com/huggingface/transformers/pull/5603",
"diff_url": "https://github.com/huggingface/transformers/pull/5603.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5603.patch",
"merged_at": 1594217040000
} |
https://api.github.com/repos/huggingface/transformers/issues/5602 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5602/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5602/comments | https://api.github.com/repos/huggingface/transformers/issues/5602/events | https://github.com/huggingface/transformers/issues/5602 | 653,296,621 | MDU6SXNzdWU2NTMyOTY2MjE= | 5,602 | MarianMT: "CUDA out of memory" when translating many times with the MarianMT Model | {
"login": "agiagoulas",
"id": 46862051,
"node_id": "MDQ6VXNlcjQ2ODYyMDUx",
"avatar_url": "https://avatars.githubusercontent.com/u/46862051?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/agiagoulas",
"html_url": "https://github.com/agiagoulas",
"followers_url": "https://api.github.com/users/agiagoulas/followers",
"following_url": "https://api.github.com/users/agiagoulas/following{/other_user}",
"gists_url": "https://api.github.com/users/agiagoulas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/agiagoulas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/agiagoulas/subscriptions",
"organizations_url": "https://api.github.com/users/agiagoulas/orgs",
"repos_url": "https://api.github.com/users/agiagoulas/repos",
"events_url": "https://api.github.com/users/agiagoulas/events{/privacy}",
"received_events_url": "https://api.github.com/users/agiagoulas/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2039044877,
"node_id": "MDU6TGFiZWwyMDM5MDQ0ODc3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/marian",
"name": "marian",
"color": "30cc95",
"default": false,
"description": ""
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"@sshleifer The Documentation page said to assign you, but I can only mention you. ",
"how big are your batches?\r\n\r\n```python\r\nsrc = 'en' # source language\r\ntrg = 'de' # target language\r\ndevice='cuda'\r\nmname = f'Helsinki-NLP/opus-mt-{src}-{trg}'\r\nmodel = MarianMTModel.from_pretrained(mname).to(device).half() # fp16 should save lots of memory\r\ntok = MarianTokenizer.from_pretrained(mname)\r\ntranslations = []\r\nfor src_text_list in chunks(data, 8): # copy paste chunks fn from run_eval.py, consider wrapping tqdm_notebook\r\n batch = tok.prepare_translation_batch(src_text_list).to(device)\r\n gen = model.generate(**batch)\r\n german: List[str] = tok.batch_decode(gen, skip_special_tokens=True)\r\n\ttranslations.extend(german)\r\n```",
"This is an example of a batch. They are all in this size.\r\nThanks in advance for your help!\r\n`\r\n['Architecturally, the school has a Catholic character. Atop the Main Building\\'s gold dome is a golden statue of the Virgin Mary. Immediately in front of the Main Building and facing it, is a copper statue of Christ with arms upraised with the legend \"Venite Ad Me Omnes\". Next to the Main Building is the Basilica of the Sacred Heart. Immediately behind the basilica is the Grotto, a Marian place of prayer and reflection. It is a replica of the grotto at Lourdes, France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858. At the end of the main drive (and in a direct line that connects through 3 statues and the Gold Dome), is a simple, modern stone statue of Mary.', 'To whom did the Virgin Mary allegedly appear in 1858 in Lourdes France?', 'Saint Bernadette Soubirous', 'What is in front of the Notre Dame Main Building?', 'a copper statue of Christ', 'The Basilica of the Sacred heart at Notre Dame is beside to which structure?', 'the Main Building', 'What is the Grotto at Notre Dame?', 'a Marian place of prayer and reflection', 'What sits on top of the Main Building at Notre Dame?', 'a golden statue of the Virgin Mary']\r\n`",
"Did my code work? Consider passing `max_length` to `prepare_translation_batch` if it doesn't.",
"@sshleifer It worked 👍 I used this fix in 2000 repetitions of the batch size at a time for a few times now and no error occured. Thank you very much for your help!"
] | 1,594 | 1,599 | 1,596 | NONE | null | # 🐛 Bug
RuntimeError('CUDA out of memory. Tried to allocate 26.00 MiB (GPU 0; 4.00 GiB total capacity; 3.03 GiB already allocated; 4.72 MiB free; 3.06 GiB reserved in total by PyTorch)')
## Information
I wrote a Python notebook to translate datasets using MarianMT. For this I wrote a function that gets called a couple of thousand times in the translation process. The function looks like this:
```python
def translate(data):
batch = tok.prepare_translation_batch(data).to('cuda')
gen = model.generate(**batch).to('cuda')
data: List[str] = tok.batch_decode(gen, skip_special_tokens=True)
return data
```
After about 1000 function calls, with `data` containing roughly 300 words each time, the error occurs. I tried freeing the GPU memory with `torch.cuda.empty_cache()` and calling the garbage collector with `gc.collect()`, but nothing changes. To my understanding I would need to use `torch.no_grad()`, but that is already applied inside the model's `generate` function.
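For reference, here is a minimal sketch of the more defensive variant I tried (it assumes the same `tok` and `model` as above; the explicit `torch.no_grad()`, the CPU transfer, and the cache clearing are precautions rather than a confirmed fix):
```python
import gc

import torch

def translate_defensively(data):
    batch = tok.prepare_translation_batch(data).to('cuda')
    with torch.no_grad():  # redundant with generate()'s internal no_grad, kept as a precaution
        gen = model.generate(**batch)
    gen = gen.to('cpu')  # move the generated ids off the GPU before decoding
    del batch  # drop the references to the GPU input tensors
    gc.collect()
    torch.cuda.empty_cache()  # release unoccupied cached memory back to the GPU
    return tok.batch_decode(gen, skip_special_tokens=True)
```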
Model I am using (Bert, XLNet ...): MarianMT
Language I am using the model on (English to German): Helsinki-NLP/opus-mt-en-de
## To reproduce
Steps to reproduce the behavior:
1. Translate an English text using the `translate` function provided below a couple of thousand times on a CUDA-enabled device
2. Depending on your GPU, the error occurs after some time
```python
from typing import List

from transformers import MarianMTModel, MarianTokenizer

src = 'en' # source language
trg = 'de' # target language
mname = f'Helsinki-NLP/opus-mt-{src}-{trg}'
model = MarianMTModel.from_pretrained(mname)
tok = MarianTokenizer.from_pretrained(mname)
model.to('cuda')
def translate(data):
batch = tok.prepare_translation_batch(data).to('cuda')
gen = model.generate(**batch).to('cuda')
data: List[str] = tok.batch_decode(gen, skip_special_tokens=True)
return data
```
## Expected behavior
No CUDA out-of-memory error; the CUDA memory should be freed after each translation call completes.
## Environment info
- `transformers` version: 3.0.0
- Platform: Windows-10-10.0.18362-SP0
- Python version: 3.7.6
- PyTorch version (GPU?): 1.5.1 (True)
- Tensorflow version (GPU?): 2.2.0 (True)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
Thank you in advance for your help; I have been struggling with this error for a while!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5602/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5602/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5601 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5601/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5601/comments | https://api.github.com/repos/huggingface/transformers/issues/5601/events | https://github.com/huggingface/transformers/pull/5601 | 653,281,384 | MDExOlB1bGxSZXF1ZXN0NDQ2MjMwMjY3 | 5,601 | Create README.md | {
"login": "bashartalafha",
"id": 26685171,
"node_id": "MDQ6VXNlcjI2Njg1MTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/26685171?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bashartalafha",
"html_url": "https://github.com/bashartalafha",
"followers_url": "https://api.github.com/users/bashartalafha/followers",
"following_url": "https://api.github.com/users/bashartalafha/following{/other_user}",
"gists_url": "https://api.github.com/users/bashartalafha/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bashartalafha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bashartalafha/subscriptions",
"organizations_url": "https://api.github.com/users/bashartalafha/orgs",
"repos_url": "https://api.github.com/users/bashartalafha/repos",
"events_url": "https://api.github.com/users/bashartalafha/events{/privacy}",
"received_events_url": "https://api.github.com/users/bashartalafha/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5601?src=pr&el=h1) Report\n> Merging [#5601](https://codecov.io/gh/huggingface/transformers/pull/5601?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f82a2a5e8e6827343322a4a9831924c5bb9bd2b2&el=desc) will **increase** coverage by `0.18%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5601?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5601 +/- ##\n==========================================\n+ Coverage 76.69% 76.88% +0.18% \n==========================================\n Files 145 145 \n Lines 25351 25351 \n==========================================\n+ Hits 19444 19490 +46 \n+ Misses 5907 5861 -46 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5601?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5601/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5601/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.53% <0.00%> (+69.51%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5601?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5601?src=pr&el=footer). Last update [f82a2a5...a3f37fa](https://codecov.io/gh/huggingface/transformers/pull/5601?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks! image link seems broken, feel free to update in a subsequent PR."
] | 1,594 | 1,594 | 1,594 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5601/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5601/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5601",
"html_url": "https://github.com/huggingface/transformers/pull/5601",
"diff_url": "https://github.com/huggingface/transformers/pull/5601.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5601.patch",
"merged_at": 1594238869000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5600 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5600/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5600/comments | https://api.github.com/repos/huggingface/transformers/issues/5600/events | https://github.com/huggingface/transformers/issues/5600 | 653,265,333 | MDU6SXNzdWU2NTMyNjUzMzM= | 5,600 | [MarianMT{ | {
"login": "agiagoulas",
"id": 46862051,
"node_id": "MDQ6VXNlcjQ2ODYyMDUx",
"avatar_url": "https://avatars.githubusercontent.com/u/46862051?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/agiagoulas",
"html_url": "https://github.com/agiagoulas",
"followers_url": "https://api.github.com/users/agiagoulas/followers",
"following_url": "https://api.github.com/users/agiagoulas/following{/other_user}",
"gists_url": "https://api.github.com/users/agiagoulas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/agiagoulas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/agiagoulas/subscriptions",
"organizations_url": "https://api.github.com/users/agiagoulas/orgs",
"repos_url": "https://api.github.com/users/agiagoulas/repos",
"events_url": "https://api.github.com/users/agiagoulas/events{/privacy}",
"received_events_url": "https://api.github.com/users/agiagoulas/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,594 | 1,594 | 1,594 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5600/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5600/timeline | completed | null | null |
|
https://api.github.com/repos/huggingface/transformers/issues/5599 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5599/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5599/comments | https://api.github.com/repos/huggingface/transformers/issues/5599/events | https://github.com/huggingface/transformers/pull/5599 | 653,257,870 | MDExOlB1bGxSZXF1ZXN0NDQ2MjEwOTA3 | 5,599 | Add newly trained `calbert-tiny-uncased` | {
"login": "txus",
"id": 83234,
"node_id": "MDQ6VXNlcjgzMjM0",
"avatar_url": "https://avatars.githubusercontent.com/u/83234?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/txus",
"html_url": "https://github.com/txus",
"followers_url": "https://api.github.com/users/txus/followers",
"following_url": "https://api.github.com/users/txus/following{/other_user}",
"gists_url": "https://api.github.com/users/txus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/txus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/txus/subscriptions",
"organizations_url": "https://api.github.com/users/txus/orgs",
"repos_url": "https://api.github.com/users/txus/repos",
"events_url": "https://api.github.com/users/txus/events{/privacy}",
"received_events_url": "https://api.github.com/users/txus/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5599?src=pr&el=h1) Report\n> Merging [#5599](https://codecov.io/gh/huggingface/transformers/pull/5599?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f82a2a5e8e6827343322a4a9831924c5bb9bd2b2&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5599?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5599 +/- ##\n=======================================\n Coverage 76.69% 76.69% \n=======================================\n Files 145 145 \n Lines 25351 25351 \n=======================================\n Hits 19444 19444 \n Misses 5907 5907 \n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5599?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5599?src=pr&el=footer). Last update [f82a2a5...0a4786e](https://codecov.io/gh/huggingface/transformers/pull/5599?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"looks great, thanks for sharing!"
] | 1,594 | 1,594 | 1,594 | CONTRIBUTOR | null | Calbert is an open-source ALBERT trained on the Catalan OSCAR dataset. This is the `tiny` version, newly trained from a complete rewrite [in this repo](https://github.com/codegram/calbert), now using SentencePiece, which makes everything work much better with pipelines and the default tooling, as demonstrated in the model card. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5599/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5599/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5599",
"html_url": "https://github.com/huggingface/transformers/pull/5599",
"diff_url": "https://github.com/huggingface/transformers/pull/5599.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5599.patch",
"merged_at": 1594245292000
} |
https://api.github.com/repos/huggingface/transformers/issues/5598 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5598/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5598/comments | https://api.github.com/repos/huggingface/transformers/issues/5598/events | https://github.com/huggingface/transformers/pull/5598 | 653,233,476 | MDExOlB1bGxSZXF1ZXN0NDQ2MTkxMTU1 | 5,598 | returned value from parse method of MeCab >= 1.0.0 was changed | {
"login": "gorogoroyasu",
"id": 17561419,
"node_id": "MDQ6VXNlcjE3NTYxNDE5",
"avatar_url": "https://avatars.githubusercontent.com/u/17561419?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gorogoroyasu",
"html_url": "https://github.com/gorogoroyasu",
"followers_url": "https://api.github.com/users/gorogoroyasu/followers",
"following_url": "https://api.github.com/users/gorogoroyasu/following{/other_user}",
"gists_url": "https://api.github.com/users/gorogoroyasu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gorogoroyasu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gorogoroyasu/subscriptions",
"organizations_url": "https://api.github.com/users/gorogoroyasu/orgs",
"repos_url": "https://api.github.com/users/gorogoroyasu/repos",
"events_url": "https://api.github.com/users/gorogoroyasu/events{/privacy}",
"received_events_url": "https://api.github.com/users/gorogoroyasu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5598?src=pr&el=h1) Report\n> Merging [#5598](https://codecov.io/gh/huggingface/transformers/pull/5598?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f82a2a5e8e6827343322a4a9831924c5bb9bd2b2&el=desc) will **increase** coverage by `0.18%`.\n> The diff coverage is `0.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5598?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5598 +/- ##\n==========================================\n+ Coverage 76.69% 76.88% +0.18% \n==========================================\n Files 145 145 \n Lines 25351 25351 \n==========================================\n+ Hits 19444 19491 +47 \n+ Misses 5907 5860 -47 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5598?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_bert\\_japanese.py](https://codecov.io/gh/huggingface/transformers/pull/5598/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydF9qYXBhbmVzZS5weQ==) | `30.48% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5598/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5598/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5598/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.53% <0.00%> (+69.51%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5598?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5598?src=pr&el=footer). Last update [f82a2a5...b6b2498](https://codecov.io/gh/huggingface/transformers/pull/5598?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks for your PR. The problem is that, even after doing this, the tokenization returned is not the same, which is the main reason we pinned mecab to <1 (with no plan to support v1 for now): since it changes the tokenization, it will break models on the Hub that were pretrained with the old tokenization.",
"Thanks for your reply and summary of the problem.\r\nI didn't realize the breaking changes of tokenizer...\r\nFor now, I'll keep using <1 versions."
] | 1,594 | 1,594 | 1,594 | NONE | null | As shown below, the value returned by MeCab's parse method changed in version 1.0.0,
so we need to fix MecabTokenizer in BertJapaneseTokenizer accordingly.
# mecab-python3==1.0.0 (latest version)
```
root@713173e4bace:/# pip freeze | grep mecab
mecab-python3==1.0.0
root@713173e4bace:/# python
Python 3.8.3 (default, Jul 7 2020, 11:33:46)
[GCC 7.5.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import MeCab
>>> m = MeCab.Tagger()
>>> m.parse('こんにちは')
'こんにちは\tコンニチワ\tコンニチハ\t今日は\t感動詞-一般\t\t\t5\nEOS\n'
```
# mecab-python3==0.996.5 (previous version)
```
root@713173e4bace:/# pip install mecab-python3==0.996.5 --upgrade
Collecting mecab-python3==0.996.5
Downloading mecab_python3-0.996.5-cp38-cp38-manylinux2010_x86_64.whl (17.1 MB)
|████████████████████████████████| 17.1 MB 1.2 MB/s
Installing collected packages: mecab-python3
Attempting uninstall: mecab-python3
Found existing installation: mecab-python3 1.0.0
Uninstalling mecab-python3-1.0.0:
Successfully uninstalled mecab-python3-1.0.0
Successfully installed mecab-python3-0.996.5
root@713173e4bace:/# pip freeze | grep mecab
mecab-python3==0.996.5
root@713173e4bace:/# python
Python 3.8.3 (default, Jul 7 2020, 11:33:46)
[GCC 7.5.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import MeCab
>>> m = MeCab.Tagger()
>>> m.parse('こんにちは')
'こんにちは\t感動詞,*,*,*,*,*,こんにちは,コンニチハ,コンニチワ\nEOS\n'
```
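For illustration, here is a minimal, version-tolerant way to extract the surface forms from either output format (a hypothetical helper, not the actual `MecabTokenizer` code); both 0.996.x and 1.0.0 put the surface form before the first tab of each line:
```python
def surface_tokens(parsed: str):
    """Collect surface forms from MeCab's parse() output, old or new format."""
    tokens = []
    for line in parsed.splitlines():
        if not line or line == "EOS":
            break
        # the surface form always precedes the first tab; everything after it
        # is feature information whose layout differs between versions
        tokens.append(line.split("\t", 1)[0])
    return tokens
```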
I found other issues related to MeCab >= 1.0.0 (like #5392) and understand that the latest version of MeCab will not be supported soon. So if this PR is too early to merge, do not hesitate to close it.
Thanks. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5598/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5598/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5598",
"html_url": "https://github.com/huggingface/transformers/pull/5598",
"diff_url": "https://github.com/huggingface/transformers/pull/5598.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5598.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/5597 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5597/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5597/comments | https://api.github.com/repos/huggingface/transformers/issues/5597/events | https://github.com/huggingface/transformers/issues/5597 | 653,231,863 | MDU6SXNzdWU2NTMyMzE4NjM= | 5,597 | DPR model examples / notebook / pipeline | {
"login": "flozi00",
"id": 47894090,
"node_id": "MDQ6VXNlcjQ3ODk0MDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/flozi00",
"html_url": "https://github.com/flozi00",
"followers_url": "https://api.github.com/users/flozi00/followers",
"following_url": "https://api.github.com/users/flozi00/following{/other_user}",
"gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flozi00/subscriptions",
"organizations_url": "https://api.github.com/users/flozi00/orgs",
"repos_url": "https://api.github.com/users/flozi00/repos",
"events_url": "https://api.github.com/users/flozi00/events{/privacy}",
"received_events_url": "https://api.github.com/users/flozi00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,594 | 1,600 | 1,600 | CONTRIBUTOR | null | # 🚀 Feature request
A notebook or a dedicated script in the examples folder would be very useful for the new DPR model, implemented by @lhoestq
Maybe an integration into the pipeline API would be nice too.
## Motivation
Such a notebook or script would make it easier to train a custom DPR model.
## Your contribution
I have not read the paper or taken a closer look at the code yet, but with some advice I could try to start a PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5597/reactions",
"total_count": 8,
"+1": 8,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5597/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5596 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5596/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5596/comments | https://api.github.com/repos/huggingface/transformers/issues/5596/events | https://github.com/huggingface/transformers/pull/5596 | 653,168,270 | MDExOlB1bGxSZXF1ZXN0NDQ2MTM2OTUy | 5,596 | Add data_collator with attention_mask feature | {
"login": "LunaBlack",
"id": 11146719,
"node_id": "MDQ6VXNlcjExMTQ2NzE5",
"avatar_url": "https://avatars.githubusercontent.com/u/11146719?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LunaBlack",
"html_url": "https://github.com/LunaBlack",
"followers_url": "https://api.github.com/users/LunaBlack/followers",
"following_url": "https://api.github.com/users/LunaBlack/following{/other_user}",
"gists_url": "https://api.github.com/users/LunaBlack/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LunaBlack/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LunaBlack/subscriptions",
"organizations_url": "https://api.github.com/users/LunaBlack/orgs",
"repos_url": "https://api.github.com/users/LunaBlack/repos",
"events_url": "https://api.github.com/users/LunaBlack/events{/privacy}",
"received_events_url": "https://api.github.com/users/LunaBlack/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Add a related issue: #4702",
"I agree that we should handle `attention_mask`, but I don't think adding another class `DataCollatorForMaskedLanguageModeling` is necessary. The class `DataCollatorForLanguageModeling` already handles masked language modeling, and should imo be modified to handle attention masks as well.",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5596?src=pr&el=h1) Report\n> Merging [#5596](https://codecov.io/gh/huggingface/transformers/pull/5596?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cfbb98297449e09e5a2443b4ba76be52a71ec0f7&el=desc) will **increase** coverage by `0.82%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5596?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5596 +/- ##\n==========================================\n+ Coverage 76.97% 77.80% +0.82% \n==========================================\n Files 145 145 \n Lines 25317 25344 +27 \n==========================================\n+ Hits 19487 19718 +231 \n+ Misses 5830 5626 -204 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5596?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/5596/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `99.25% <100.00%> (+0.15%)` | :arrow_up: |\n| [...rc/transformers/data/datasets/language\\_modeling.py](https://codecov.io/gh/huggingface/transformers/pull/5596/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2xhbmd1YWdlX21vZGVsaW5nLnB5) | `92.45% <100.00%> (+0.61%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5596/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.27% <0.00%> (-74.92%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5596/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.72% <0.00%> (-1.38%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5596/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `92.44% <0.00%> (-1.17%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5596/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.26% <0.00%> (-0.34%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5596/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.96% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5596/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.88% <0.00%> (+0.98%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5596/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5596?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5596?src=pr&el=footer). Last update [cfbb982...cd3ae45](https://codecov.io/gh/huggingface/transformers/pull/5596?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"@LysandreJik \r\nI agree!\r\nI have modified the code, and passed related unit tests. \r\nCould you please help review the code?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,594 | 1,602 | 1,602 | NONE | null | For models like BERT, we should mark the padding ids as invalid to avoid attention on them. Therefore, in addition to `input_ids`, the feature `attention_mask` should also be fed to the model.
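For illustration, a minimal sketch of the padding logic this implies (simplified; the actual implementation in this PR lives in `data_collator.py` and differs in detail):
```python
import torch

def pad_batch(examples, pad_token_id):
    """Pad variable-length id lists; mark real tokens with 1 and padding with 0."""
    max_len = max(len(ids) for ids in examples)
    input_ids = torch.full((len(examples), max_len), pad_token_id, dtype=torch.long)
    attention_mask = torch.zeros((len(examples), max_len), dtype=torch.long)
    for i, ids in enumerate(examples):
        input_ids[i, : len(ids)] = torch.tensor(ids, dtype=torch.long)
        attention_mask[i, : len(ids)] = 1
    return {"input_ids": input_ids, "attention_mask": attention_mask}
```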
This PR adds a data collator `DataCollatorForMaskedLanguageModeling`, which returns all of the above data, and a dataset class `LineByLineTextMaskDataset` that builds on it. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5596/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5596/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5596",
"html_url": "https://github.com/huggingface/transformers/pull/5596",
"diff_url": "https://github.com/huggingface/transformers/pull/5596.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5596.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/5595 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5595/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5595/comments | https://api.github.com/repos/huggingface/transformers/issues/5595/events | https://github.com/huggingface/transformers/issues/5595 | 653,149,185 | MDU6SXNzdWU2NTMxNDkxODU= | 5,595 | transformer dataset and masked LM | {
"login": "siheming",
"id": 30624256,
"node_id": "MDQ6VXNlcjMwNjI0MjU2",
"avatar_url": "https://avatars.githubusercontent.com/u/30624256?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/siheming",
"html_url": "https://github.com/siheming",
"followers_url": "https://api.github.com/users/siheming/followers",
"following_url": "https://api.github.com/users/siheming/following{/other_user}",
"gists_url": "https://api.github.com/users/siheming/gists{/gist_id}",
"starred_url": "https://api.github.com/users/siheming/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/siheming/subscriptions",
"organizations_url": "https://api.github.com/users/siheming/orgs",
"repos_url": "https://api.github.com/users/siheming/repos",
"events_url": "https://api.github.com/users/siheming/events{/privacy}",
"received_events_url": "https://api.github.com/users/siheming/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Should I also put the questions form the stackoverflow post in this issue? Or is there any problem with this issue? Did I overlook something?",
"Posted the solution I found on github."
] | 1,594 | 1,595 | 1,595 | NONE | null | # ❓ Questions & Help
I was wondering about the masked LM models and the datasets in the `transformers` library.
## Details
**A link to the original question on Stack Overflow**:
https://stackoverflow.com/questions/62757772/hugging-face-tokenizer-for-masked-lm-question | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5595/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5595/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5594 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5594/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5594/comments | https://api.github.com/repos/huggingface/transformers/issues/5594/events | https://github.com/huggingface/transformers/pull/5594 | 653,113,306 | MDExOlB1bGxSZXF1ZXN0NDQ2MDkzMzM0 | 5,594 | [Benchmark] Add benchmarks for TF Training | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5594?src=pr&el=h1) Report\n> Merging [#5594](https://codecov.io/gh/huggingface/transformers/pull/5594?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cfbb98297449e09e5a2443b4ba76be52a71ec0f7&el=desc) will **increase** coverage by `1.13%`.\n> The diff coverage is `15.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5594?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5594 +/- ##\n==========================================\n+ Coverage 76.95% 78.08% +1.13% \n==========================================\n Files 145 145 \n Lines 25317 25351 +34 \n==========================================\n+ Hits 19482 19795 +313 \n+ Misses 5835 5556 -279 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5594?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/benchmark/benchmark\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5594/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3RmLnB5) | `61.53% <10.81%> (-18.28%)` | :arrow_down: |\n| [src/transformers/benchmark/benchmark.py](https://codecov.io/gh/huggingface/transformers/pull/5594/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrLnB5) | `74.01% <66.66%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5594/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-49.38%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5594/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.95% <0.00%> (-1.01%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5594/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.26% <0.00%> (-0.34%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5594/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.88% <0.00%> (+0.98%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5594/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5594?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5594?src=pr&el=footer). Last update [cfbb982...2e17a73](https://codecov.io/gh/huggingface/transformers/pull/5594?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Merging. Pinging @LysandreJik for notification."
] | 1,594 | 1,594 | 1,594 | MEMBER | null | This PR adds training functions to the TF benchmarks, including tests; a usage sketch is shown below.
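A minimal usage sketch (the argument names are assumed from the benchmark utilities; check `TensorFlowBenchmarkArguments` for the exact signature):
```python
from transformers import TensorFlowBenchmark, TensorFlowBenchmarkArguments

# training=True benchmarks a forward + backward pass instead of inference only
args = TensorFlowBenchmarkArguments(
    models=["bert-base-uncased"],
    batch_sizes=[8],
    sequence_lengths=[128],
    training=True,
)
results = TensorFlowBenchmark(args).run()
```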
The notebook is updated accordingly as well. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5594/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5594/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5594",
"html_url": "https://github.com/huggingface/transformers/pull/5594",
"diff_url": "https://github.com/huggingface/transformers/pull/5594.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5594.patch",
"merged_at": 1594203070000
} |
https://api.github.com/repos/huggingface/transformers/issues/5593 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5593/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5593/comments | https://api.github.com/repos/huggingface/transformers/issues/5593/events | https://github.com/huggingface/transformers/issues/5593 | 653,098,510 | MDU6SXNzdWU2NTMwOTg1MTA= | 5,593 | AttributeError: 'Tensor' object has no attribute 'ndim' | {
"login": "Imenbaa",
"id": 45403868,
"node_id": "MDQ6VXNlcjQ1NDAzODY4",
"avatar_url": "https://avatars.githubusercontent.com/u/45403868?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Imenbaa",
"html_url": "https://github.com/Imenbaa",
"followers_url": "https://api.github.com/users/Imenbaa/followers",
"following_url": "https://api.github.com/users/Imenbaa/following{/other_user}",
"gists_url": "https://api.github.com/users/Imenbaa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Imenbaa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Imenbaa/subscriptions",
"organizations_url": "https://api.github.com/users/Imenbaa/orgs",
"repos_url": "https://api.github.com/users/Imenbaa/repos",
"events_url": "https://api.github.com/users/Imenbaa/events{/privacy}",
"received_events_url": "https://api.github.com/users/Imenbaa/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! Do you mind pasting the command you used to run the script? Thank you!",
"Hi @LysandreJik ,\r\nThank you for your quick reply.\r\nFirst, I want to cheer you for your amazing work in HuggingFaces Transformers.\r\nFor that error I used the command \"python run.generation.py --model_type=gpt2 --model_name_or_path=gpt2\".\r\nI doubt that the problem originates from my Pytorch version and installation. What do you think?\r\n\r\n",
"Thank you :) \r\n\r\nThis doesn't fail in my environment. Are you running on an older `transformers` version? Do you mind pasting your environment information here?\r\n\r\nJust running `transformers-cli env` in your environment should return something similar to this:\r\n\r\n```\r\n- `transformers` version: 3.0.2\r\n- Platform: Linux-5.6.16-1-MANJARO-x86_64-with-arch-Manjaro-Linux\r\n- Python version: 3.6.10\r\n- PyTorch version (GPU?): 1.5.0 (True)\r\n- Tensorflow version (GPU?): 2.2.0 (False)\r\n- Using GPU in script?: <fill in>\r\n- Using distributed or parallel set-up in script?: <fill in>\r\n```",
"I ran into the same issue when using pytorch19.01-py3 Nvidia container but don't get the error when using pytorch20.06-py3 Nvidia container. I can't use the latest version of Pytorch because it doesn't support Cuda driver 10.0 - only Cuda driver version 11.0. The Cuda driver I have is 10.0 version and there are limitations on Nvidia DGX to upgrade to Cuda driver 11.0. \r\n\r\n---------------------------------------------------------------------------\r\nAttributeError Traceback (most recent call last)\r\n/opt/conda/lib/python3.6/site-packages/transformers/tokenization_utils_base.py in convert_to_tensors(self, tensor_type, prepend_batch_axis)\r\n 506 # at-least2d\r\n--> 507 if tensor.ndim > 2:\r\n 508 tensor = tensor.squeeze(0)\r\n\r\nAttributeError: 'Tensor' object has no attribute 'ndim'\r\n\r\nDuring handling of the above exception, another exception occurred:",
"This is the command I am using that is triggering the error:\r\nencoding = self.tokenizer.encode_plus(\r\n note,\r\n add_special_tokens=True,\r\n max_length=self.max_len,\r\n return_token_type_ids=True,\r\n truncation=True,\r\n pad_to_max_length=True,\r\n return_attention_mask=True,\r\n return_tensors='pt',\r\n ) ",
"The error went away when I switched off the \"return_tensors='pt' \" argument",
"I'm also running into the same error @LysandreJik . Here's my `transformers-cli env`. \r\n`- transformers version: 3.0.2`\r\n`- Platform: Darwin-19.5.0-x86_64-i386-64bit`\r\n`- Python version: 3.7.3`\r\n`- PyTorch version (GPU?): 1.0.1 (False)`\r\n`- Tensorflow version (GPU?): not installed (NA)`\r\n`- Using GPU in script?: <fill in>`\r\n`- Using distributed or parallel set-up in script?: <fill in>`\r\nRunning something trivial like `python -c \"from transformers import pipeline; print(pipeline('sentiment-analysis')('I hate you'))\"` reproduces the error which leads me to believe I'm missing something obvious here. Here's the last error:\r\n`\"Unable to create tensor, you should probably activate truncation and/or padding \"\r\nValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length.`\r\n",
"I don't know if you are willing to upgrade, but your issue is that `Tensor.ndim` was introduced in a later version. If you uninstall pytorch and follow these instructions https://pytorch.org/get-started/locally/ you should be all set.",
"hi guys i have the same problem i'm trying to load a model and integrate it in django\r\nany solution ?",
"same error ! but when i run it on kaggle it works",
"Easiest solution is upgrading torch."
] | 1,594 | 1,595 | 1,595 | NONE | null | # ❓ Questions & Help
Hello,
when I run the run_generation.py file, I get this error:
Traceback (most recent call last):
File "C:\Users\imen.benamor\AppData\Local\Programs\Python\Python36\lib\site-packages\transformers\tokenization_utils_base.py", line 507, in convert_to_tensors
if tensor.ndim > 2:
AttributeError: 'Tensor' object has no attribute 'ndim'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "run_generation.py", line 274, in <module>
main()
File "run_generation.py", line 227, in main
encoded_prompt = tokenizer.encode(prompt_text, add_special_tokens=False, return_tensors="pt")
File "C:\Users\imen.benamor\AppData\Local\Programs\Python\Python36\lib\site-packages\transformers\tokenization_utils_base.py", line 1425, in encode
**kwargs,
File "C:\Users\imen.benamor\AppData\Local\Programs\Python\Python36\lib\site-packages\transformers\tokenization_utils_base.py", line 1737, in encode_plus
**kwargs,
File "C:\Users\imen.benamor\AppData\Local\Programs\Python\Python36\lib\site-packages\transformers\tokenization_utils.py", line 473, in _encode_plus
verbose=verbose,
File "C:\Users\imen.benamor\AppData\Local\Programs\Python\Python36\lib\site-packages\transformers\tokenization_utils_base.py", line 2098, in prepare_for_model
encoded_inputs, tensor_type=return_tensors, prepend_batch_axis=prepend_batch_axis
File "C:\Users\imen.benamor\AppData\Local\Programs\Python\Python36\lib\site-packages\transformers\tokenization_utils_base.py", line 159, in __init__
self.convert_to_tensors(tensor_type=tensor_type, prepend_batch_axis=prepend_batch_axis)
File "C:\Users\imen.benamor\AppData\Local\Programs\Python\Python36\lib\site-packages\transformers\tokenization_utils_base.py", line 515, in convert_to_tensors
"Unable to create tensor, you should probably activate truncation and/or padding "
ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length.
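For reference, a quick way to check whether the installed torch build actually exposes `Tensor.ndim` (the attribute this code path relies on; `dim()` is the older equivalent that exists in all versions):
```python
import torch

t = torch.zeros(1)
print(torch.__version__)
print(hasattr(t, "ndim"))  # False on older torch builds; upgrading torch resolves this
print(t.dim())             # always available and returns the same number of dimensions
```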
Any suggestions?
Thank you | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5593/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5593/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5592 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5592/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5592/comments | https://api.github.com/repos/huggingface/transformers/issues/5592/events | https://github.com/huggingface/transformers/pull/5592 | 653,023,382 | MDExOlB1bGxSZXF1ZXN0NDQ2MDE5MTEz | 5,592 | Allow to set Adam beta1, beta2 in TrainingArgs | {
"login": "gonglinyuan",
"id": 9744170,
"node_id": "MDQ6VXNlcjk3NDQxNzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/9744170?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gonglinyuan",
"html_url": "https://github.com/gonglinyuan",
"followers_url": "https://api.github.com/users/gonglinyuan/followers",
"following_url": "https://api.github.com/users/gonglinyuan/following{/other_user}",
"gists_url": "https://api.github.com/users/gonglinyuan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gonglinyuan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gonglinyuan/subscriptions",
"organizations_url": "https://api.github.com/users/gonglinyuan/orgs",
"repos_url": "https://api.github.com/users/gonglinyuan/repos",
"events_url": "https://api.github.com/users/gonglinyuan/events{/privacy}",
"received_events_url": "https://api.github.com/users/gonglinyuan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5592?src=pr&el=h1) Report\n> Merging [#5592](https://codecov.io/gh/huggingface/transformers/pull/5592?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cfbb98297449e09e5a2443b4ba76be52a71ec0f7&el=desc) will **increase** coverage by `1.21%`.\n> The diff coverage is `75.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5592?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5592 +/- ##\n==========================================\n+ Coverage 76.95% 78.16% +1.21% \n==========================================\n Files 145 145 \n Lines 25317 25319 +2 \n==========================================\n+ Hits 19482 19790 +308 \n+ Misses 5835 5529 -306 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5592?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5592/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `37.96% <0.00%> (ø)` | |\n| [src/transformers/trainer\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5592/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `16.53% <ø> (ø)` | |\n| [src/transformers/optimization\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5592/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `57.65% <100.00%> (ø)` | |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/5592/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `78.00% <100.00%> (+0.44%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5592/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-49.38%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5592/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.20% <0.00%> (-1.76%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5592/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.26% <0.00%> (-0.34%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5592/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5592?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5592?src=pr&el=footer). Last update [cfbb982...9bdb1a5](https://codecov.io/gh/huggingface/transformers/pull/5592?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"LGTM! Really nice!!!",
"I'm fine with this"
] | 1,594 | 1,595 | 1,595 | CONTRIBUTOR | null | In some models, `beta1` and `beta2` in Adam optimizer are set to be different from the default values `(0.9, 0.999)`. For example, RoBERTa set `beta2 = 0.98`. It is thereby necessary to add `beta1` and `beta2` in `TrainingArgs` if the user wants to fine-tune RoBERTa and other similar models. Also, another hyperparameter of Adam, `adam_epsilon`, has already been added to `TrainingArgs`. For the purpose of consistency, it would be better of `adam_beta1` and `adam_beta2` are also added. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5592/reactions",
"total_count": 5,
"+1": 1,
"-1": 0,
"laugh": 1,
"hooray": 1,
"confused": 0,
"heart": 1,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5592/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5592",
"html_url": "https://github.com/huggingface/transformers/pull/5592",
"diff_url": "https://github.com/huggingface/transformers/pull/5592.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5592.patch",
"merged_at": 1595842298000
} |
https://api.github.com/repos/huggingface/transformers/issues/5591 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5591/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5591/comments | https://api.github.com/repos/huggingface/transformers/issues/5591/events | https://github.com/huggingface/transformers/issues/5591 | 652,971,055 | MDU6SXNzdWU2NTI5NzEwNTU= | 5,591 | KeyError Issue in Question answering | {
"login": "siddBanPsu",
"id": 9299962,
"node_id": "MDQ6VXNlcjkyOTk5NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/9299962?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/siddBanPsu",
"html_url": "https://github.com/siddBanPsu",
"followers_url": "https://api.github.com/users/siddBanPsu/followers",
"following_url": "https://api.github.com/users/siddBanPsu/following{/other_user}",
"gists_url": "https://api.github.com/users/siddBanPsu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/siddBanPsu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/siddBanPsu/subscriptions",
"organizations_url": "https://api.github.com/users/siddBanPsu/orgs",
"repos_url": "https://api.github.com/users/siddBanPsu/repos",
"events_url": "https://api.github.com/users/siddBanPsu/events{/privacy}",
"received_events_url": "https://api.github.com/users/siddBanPsu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I believe this issue has been recurring in other versions due to tokenization related indexing issue. \r\n\r\nI did the following to get the span from the answer. \r\n\r\n```\r\nMODEL_LOC = \"distilbert-base-uncased-distilled-squad\"\r\nTOKENIZER_LOC = \"distilbert-base-uncased-distilled-squad\"\r\nmodel = AutoModelForQuestionAnswering.from_pretrained(MODEL_LOC)\r\ntokenizer = AutoTokenizer.from_pretrained(TOKENIZER_LOC)\r\n\r\ndef get_answer(question, contexts: List[str]):\r\n q_c_pairs = [(question, c) for c in contexts]\r\n encoding = tokenizer.batch_encode_plus(q_c_pairs, \r\n max_length=256,\r\n pad_to_max_length=True,\r\n truncation=True,\r\n )\r\n q_encoding = tokenizer.encode_plus(question, max_length=256, truncation=True,)\r\n q_len = len(q_encoding[\"input_ids\"])\r\n answer_encoding = torch.tensor(encoding[\"input_ids\"])[:, q_len:]\r\n model_input_names = tokenizer.model_input_names + [\"input_ids\"]\r\n fw_args = {k: torch.tensor(encoding[k]) for k in model_input_names}\r\n with torch.no_grad():\r\n start_scores, end_scores = model(**fw_args)\r\n start_scores_prob = F.softmax(start_scores, dim=1)\r\n start_scores_prob = start_scores_prob[:, q_len:]\r\n start_max, start_max_index = torch.max(start_scores_prob, dim=1)\r\n end_scores_prob = F.softmax(end_scores, dim=1)\r\n end_scores_prob = end_scores_prob[:, q_len:]\r\n # Making sure only indices beyond start_index are considered. Forcibly make ones before start_idx as 0\r\n end_scores_dummy = torch.ones_like(end_scores_prob)\r\n end_scores_prob = (torch.arange(end_scores_dummy.size(1)) > start_max_index.unsqueeze(1)) * 1.0 * end_scores_prob\r\n end_max, end_max_index = torch.max(end_scores_prob, dim=1)\r\n probs = (start_max * end_max)\r\n for i, (s, e, p) in enumerate(zip(start_max_index, end_max_index, probs)):\r\n token_ids = answer_encoding[i][s:e + 1]\r\n tokens = tokenizer.convert_ids_to_tokens(token_ids, skip_special_tokens=True)\r\n ans_string = tokenizer.convert_tokens_to_string(tokens)\r\n print(s, e, ans_string, p.cpu().numpy())\r\n\r\nquestion = \"What is the incubation period\"\r\ncontext = \" \".join([\"The incubation period is around 5 days with a maximum of 12-13 days\"] * 20)\r\n\r\nget_answer(question, [context])\r\n```\r\nI get the desired output and seems to work fine for many cases I tried and with different lengths.\r\n",
"Hello! This seems to have been patched on `master`, it fails for me on `v3.0.2` but not on `master`.\r\n\r\nThe fix will be in the next version, in the meantime you can install from source:\r\n\r\n```py\r\npip install git+https://github.com/huggingface/transformers\r\n```",
"Works fine now. Thanks."
] | 1,594 | 1,594 | 1,594 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): DistilBERT
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* my own modified scripts: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Just run this code.
```python
from transformers import pipeline
MODEL_LOC = "distilbert-base-uncased-distilled-squad"
TOKENIZER_LOC = "distilbert-base-uncased-distilled-squad"
qa = pipeline(
"question-answering",
model=MODEL_LOC,
tokenizer=TOKENIZER_LOC
)
context = " ".join(["The incubation period is around 5 days (range: 4-7 days) with a maximum of 12-13 day"]*10)
qa({"question": "incubation period?", "context": context})
```
## Expected behavior
It should just return a JSON output. If I change `*10` to `*5`, it works.
## Environment info
- `transformers` version: 3.0.2
- Platform: OS X
- Python version: 3.6
- PyTorch version (GPU?): 1.5
- Tensorflow version (GPU?):
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: no
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5591/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5591/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5590 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5590/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5590/comments | https://api.github.com/repos/huggingface/transformers/issues/5590/events | https://github.com/huggingface/transformers/issues/5590 | 652,916,593 | MDU6SXNzdWU2NTI5MTY1OTM= | 5,590 | HF Trainer Segmentation Fault | {
"login": "ksjae",
"id": 17930170,
"node_id": "MDQ6VXNlcjE3OTMwMTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/17930170?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ksjae",
"html_url": "https://github.com/ksjae",
"followers_url": "https://api.github.com/users/ksjae/followers",
"following_url": "https://api.github.com/users/ksjae/following{/other_user}",
"gists_url": "https://api.github.com/users/ksjae/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ksjae/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ksjae/subscriptions",
"organizations_url": "https://api.github.com/users/ksjae/orgs",
"repos_url": "https://api.github.com/users/ksjae/repos",
"events_url": "https://api.github.com/users/ksjae/events{/privacy}",
"received_events_url": "https://api.github.com/users/ksjae/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi! Could you paste here the result of `pip list` in your environment ?",
"```\r\nabsl-py 0.9.0\r\napex 0.1\r\nastor 0.8.1\r\nastunparse 1.6.3\r\nbackcall 0.1.0\r\nbeautifulsoup4 4.9.1\r\nblis 0.4.1\r\nBottleneck 1.3.2\r\ncachetools 4.1.0\r\ncatalogue 1.0.0\r\ncertifi 2020.6.20\r\nchardet 3.0.4\r\nclick 7.1.2\r\ncycler 0.10.0\r\ncymem 2.0.3\r\nCython 0.29.20\r\ndecorator 4.4.2\r\nfastai 1.0.61\r\nfastprogress 0.2.3\r\nfilelock 3.0.12\r\nfire 0.3.1\r\nfuture 0.18.2\r\ngast 0.2.2\r\ngluonnlp 0.9.1\r\ngoogle-auth 1.18.0\r\ngoogle-auth-oauthlib 0.4.1\r\ngoogle-pasta 0.2.0\r\ngraphviz 0.8.4\r\ngrpcio 1.29.0\r\nh5py 2.10.0\r\nidna 2.8\r\nimportlib-metadata 1.6.1\r\nipython 7.14.0\r\nipython-genutils 0.2.0\r\njedi 0.17.0\r\njoblib 0.15.1\r\nKeras-Applications 1.0.8\r\nKeras-Preprocessing 1.1.2\r\nkiwisolver 1.2.0\r\nkobert-transformers 0.4.1\r\nkogpt2 0.1.1\r\nkss 1.3.1\r\nMarkdown 3.2.2\r\nmatplotlib 3.2.2\r\nmecab-python3 1.0.0\r\nmurmurhash 1.0.2\r\nmxnet 1.6.0\r\nnatto 0.1.7\r\nnumexpr 2.7.1\r\nnumpy 1.19.0\r\nnvidia-ml-py3 7.352.0\r\noauthlib 3.1.0\r\nopt-einsum 3.2.1\r\npackaging 20.4\r\npandas 1.0.5\r\nparso 0.7.0\r\npdf2image 1.9.0\r\npexpect 4.8.0\r\npickleshare 0.7.5\r\nPillow 6.2.0\r\npip 20.1.1\r\nplac 1.1.3\r\npreshed 3.0.2\r\nprompt-toolkit 3.0.5\r\nprotobuf 3.12.2\r\npsutil 5.7.0\r\nptyprocess 0.6.0\r\npyasn1 0.4.8\r\npyasn1-modules 0.2.8\r\nPygments 2.6.1\r\npyparsing 2.4.7\r\npytesseract 0.2.7\r\npython-dateutil 2.8.1\r\npytz 2020.1\r\nPyYAML 5.3.1\r\nregex 2017.4.5\r\nrequests 2.21.0\r\nrequests-oauthlib 1.3.0\r\nrsa 4.6\r\nsacremoses 0.0.43\r\nscikit-learn 0.23.1\r\nscipy 1.4.1\r\nsentencepiece 0.1.91\r\nsetuptools 41.2.0\r\nsix 1.14.0\r\nsoupsieve 2.0.1\r\nsoynlp 0.0.493\r\nspacy 2.3.0\r\nsrsly 1.0.2\r\ntensorboard 1.15.0\r\ntensorboard-plugin-wit 1.6.0.post3\r\ntensorflow 1.15.0\r\ntensorflow-estimator 1.15.1\r\ntermcolor 1.1.0\r\nthinc 7.4.1\r\nthreadpoolctl 2.1.0\r\ntokenizers 0.7.0\r\ntorch 1.5.1+cu101\r\ntorchvision 0.6.1+cu101\r\ntqdm 4.46.1\r\ntraitlets 4.3.3\r\ntransformers 2.11.0\r\nurllib3 1.24.3\r\nwasabi 0.7.0\r\nwcwidth 0.1.9\r\nWerkzeug 1.0.1\r\nwheel 0.34.2\r\nwrapt 1.12.1\r\nzipp 3.1.0\r\n```\r\n\r\n* Result was the same with\r\n```\r\ntokenizers 0.8.1rc1\r\ntransformers 3.0.2\r\n```",
"Is there anything else I should post?",
"Bumping @sgugger to analyze this issue.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,594 | 1,602 | 1,602 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): GPT2-medium & large
Language I am using the model on (English, Chinese ...): Korean (with custom trained tokenizer)
The problem arises when using:
* [ O ] the official example scripts: (give details below)
https://huggingface.co/blog/how-to-train
* [ O ] my own modified scripts: (give details below)
```
import faulthandler  # needed for the faulthandler.enable() call below
from transformers import Trainer, TrainingArguments, DataCollatorForLanguageModeling, LineByLineTextDataset
from transformers import GPT2Config, GPT2LMHeadModel, GPT2TokenizerFast
config = GPT2Config.from_pretrained('gpt2-medium')
model = GPT2LMHeadModel(config=config)
# a single tokenizer load is enough (an earlier duplicate load without model_max_length was dropped)
tokenizer = GPT2TokenizerFast.from_pretrained("./data/TOKEN", model_max_length=1024)
print('loading dataset...')
dataset = LineByLineTextDataset(
tokenizer=tokenizer,
file_path="./data/kowiki.txt",
block_size=512,
)
training_args = TrainingArguments(
output_dir='./m', # output directory
num_train_epochs=1, # total # of training epochs
per_device_train_batch_size=1, # batch size per device during training - the higher the better, but may OOM
per_device_eval_batch_size=1, # batch size for evaluation
logging_dir='./logs', # directory for storing logs
save_steps=10000,
do_train=True
)
trainer = Trainer(
model=model, # the instantiated Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=dataset, # training dataset
)
faulthandler.enable()
trainer.train()
```
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ O ] my own task or dataset: (give details below)
Text generation with prompt (trained on wiki & novel)
## To reproduce
Steps to reproduce the behavior:
1. Modify the path to the data file
2. Use any file (tested with Korean, UTF-8)
3. Use any tokenizer (tested with custom and GPT2 tokenizers)
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
### Error message
```
loading dataset...
Epoch: 0%| | 0/1 [00:00<?, ?it/s]
Fatal Python error: Segmentation fault | 0/99996 [00:00<?, ?it/s]
Thread 0x00007f872dfff700 (most recent call first):
File "/opt/conda/lib/python3.6/threading.py", line 299 in wait
File "/opt/conda/lib/python3.6/threading.py", line 551 in wait
File "/opt/conda/lib/python3.6/site-packages/tqdm/_monitor.py", line 69 in run
File "/opt/conda/lib/python3.6/threading.py", line 916 in _bootstrap_inner
File "/opt/conda/lib/python3.6/threading.py", line 884 in _bootstrap
Thread 0x00007f8736bb5700 (most recent call first):
File "/opt/conda/lib/python3.6/threading.py", line 299 in wait
File "/opt/conda/lib/python3.6/queue.py", line 173 in get
File "/opt/conda/lib/python3.6/site-packages/tensorboard/summary/writer/event_file_writer.py", line 205 in run
File "/opt/conda/lib/python3.6/threading.py", line 916 in _bootstrap_inner
File "/opt/conda/lib/python3.6/threading.py", line 884 in _bootstrap
Current thread 0x00007f88273e7740 (most recent call first):
File "/opt/conda/lib/python3.6/site-packages/torch/cuda/comm.py", line 39 in broadcast_coalesced
File "/opt/conda/lib/python3.6/site-packages/torch/nn/parallel/_functions.py", line 21 in forward
File "/opt/conda/lib/python3.6/site-packages/torch/nn/parallel/replicate.py", line 71 in _broadcast_coalesced_reshape
File "/opt/conda/lib/python3.6/site-packages/torch/nn/parallel/replicate.py", line 88 in replicate
File "/opt/conda/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 159 in replicate
File "/opt/conda/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 154 in forward
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 577 in __call__
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 622 in _training_step
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 499 in train
File "trainer.py", line 34 in <module>
Segmentation fault (core dumped)
```
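The trace dies inside `torch/nn/parallel` (the `replicate`/`broadcast_coalesced` path), which only runs when the Trainer wraps the model in `DataParallel` for multi-GPU use. A hypothetical probe (not a fix) is to pin the process to a single device before torch is imported and see whether the segfault disappears:
```
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # must run before importing torch/transformers

import torch
print(torch.cuda.device_count())  # expect 1, so Trainer skips the DataParallel path
```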
## Expected behavior
Proceed through training (as normal)
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.0.2
- Platform: Linux-4.4.0-178-generic-x86_64-with-debian-buster-sid
- Python version: 3.6.10
- PyTorch version (GPU?): 1.6.0a0+9907a3e (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Planning to (don't see any flags!)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5590/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5590/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5589 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5589/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5589/comments | https://api.github.com/repos/huggingface/transformers/issues/5589/events | https://github.com/huggingface/transformers/issues/5589 | 652,823,895 | MDU6SXNzdWU2NTI4MjM4OTU= | 5,589 | Datasets & collators for NER | {
"login": "Pradhy729",
"id": 49659913,
"node_id": "MDQ6VXNlcjQ5NjU5OTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/49659913?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Pradhy729",
"html_url": "https://github.com/Pradhy729",
"followers_url": "https://api.github.com/users/Pradhy729/followers",
"following_url": "https://api.github.com/users/Pradhy729/following{/other_user}",
"gists_url": "https://api.github.com/users/Pradhy729/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Pradhy729/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Pradhy729/subscriptions",
"organizations_url": "https://api.github.com/users/Pradhy729/orgs",
"repos_url": "https://api.github.com/users/Pradhy729/repos",
"events_url": "https://api.github.com/users/Pradhy729/events{/privacy}",
"received_events_url": "https://api.github.com/users/Pradhy729/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,594 | 1,599 | 1,599 | CONTRIBUTOR | null | # 🚀 Feature request
Is there a plan to add a dataset subclass and loader/collator for token classification tasks in the CoNLL format? Or has that been deliberately avoided for some reason?
## Motivation
We have TextDatasets and DataCollatorForLanguageModeling. Can we have something similar for token classification tasks with a CoNLL-format input text file? That way we can just point to a file and a tokenizer and generate a dataset, like the language modeling tasks.
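A minimal sketch of what such a class could look like. Everything here is hypothetical (the class name, the `label_to_id` map, the first-sub-token labeling scheme), and it assumes a BERT-like tokenizer that adds exactly two special tokens:
```
import torch
from torch.utils.data import Dataset

class ConllDataset(Dataset):
    """Hypothetical sketch: one 'token label' pair per line, blank lines split sentences."""

    def __init__(self, file_path, tokenizer, label_to_id, max_length=128):
        self.features = []
        words, tags = [], []
        with open(file_path, encoding="utf-8") as f:
            for line in f:
                line = line.strip()
                if not line:  # sentence boundary
                    if words:
                        self.features.append(self._encode(tokenizer, words, tags, label_to_id, max_length))
                        words, tags = [], []
                    continue
                parts = line.split()
                words.append(parts[0])
                tags.append(parts[-1])
        if words:  # file may end without a trailing blank line
            self.features.append(self._encode(tokenizer, words, tags, label_to_id, max_length))

    @staticmethod
    def _encode(tokenizer, words, tags, label_to_id, max_length):
        tokens, labels = [], []
        for word, tag in zip(words, tags):
            sub_tokens = tokenizer.tokenize(word) or [tokenizer.unk_token]
            tokens.extend(sub_tokens)
            # label only the first sub-token; -100 is ignored by the loss
            labels.extend([label_to_id[tag]] + [-100] * (len(sub_tokens) - 1))
        tokens, labels = tokens[: max_length - 2], labels[: max_length - 2]
        input_ids = tokenizer.build_inputs_with_special_tokens(tokenizer.convert_tokens_to_ids(tokens))
        labels = [-100] + labels + [-100]  # [CLS]/[SEP] positions are ignored
        attention_mask = [1] * len(input_ids)
        pad = max_length - len(input_ids)
        input_ids += [tokenizer.pad_token_id] * pad
        attention_mask += [0] * pad
        labels += [-100] * pad
        return {
            "input_ids": torch.tensor(input_ids),
            "attention_mask": torch.tensor(attention_mask),
            "labels": torch.tensor(labels),
        }

    def __len__(self):
        return len(self.features)

    def __getitem__(self, idx):
        return self.features[idx]
```
With something like this, pointing a `Trainer` at the dataset would mirror the language modeling workflow.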
## Your contribution
I have a working version of this that I use in my projects. I can package and share it if it would be useful to add directly to the transformers library. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5589/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5589/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5588 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5588/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5588/comments | https://api.github.com/repos/huggingface/transformers/issues/5588/events | https://github.com/huggingface/transformers/issues/5588 | 652,810,906 | MDU6SXNzdWU2NTI4MTA5MDY= | 5,588 | [Some weights or buffers of the PyTorch model TFGPT2LMHeadModel were not initialized] convert GPT2 pytorch to tensorflow model | {
"login": "gyin94",
"id": 67664443,
"node_id": "MDQ6VXNlcjY3NjY0NDQz",
"avatar_url": "https://avatars.githubusercontent.com/u/67664443?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gyin94",
"html_url": "https://github.com/gyin94",
"followers_url": "https://api.github.com/users/gyin94/followers",
"following_url": "https://api.github.com/users/gyin94/following{/other_user}",
"gists_url": "https://api.github.com/users/gyin94/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gyin94/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gyin94/subscriptions",
"organizations_url": "https://api.github.com/users/gyin94/orgs",
"repos_url": "https://api.github.com/users/gyin94/repos",
"events_url": "https://api.github.com/users/gyin94/events{/privacy}",
"received_events_url": "https://api.github.com/users/gyin94/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! The way you converted your model is the recommended way :ok_hand: \r\n\r\nThe warning is irrelevant to this conversion, we should try to make that clearer.",
"The warning is very confusing and can be fixed by pull request #6604.",
"The warning is very confusing and can be fixed by new pull request #6623"
] | 1,594 | 1,597 | 1,594 | NONE | null | # ❓ Questions & Help
How can we convert a GPT2 model fine-tuned with run_language_model.py from PyTorch to a TensorFlow model? I ran into the following warning when importing the PyTorch model GPT2LMHeadModel into TFGPT2LMHeadModel.
```
WARNING:transformers.modeling_tf_pytorch_utils:Some weights or buffers of the PyTorch model TFGPT2LMHeadModel were not initialized from the TF 2.0 model and are newly initialized: ['transformer.h.2.attn.bias', 'transformer.h.4.attn.masked_bias', 'transformer.h.4.attn.bias', 'transformer.h.3.attn.bias', 'transformer.h.3.attn.masked_bias', 'transformer.h.0.attn.masked_bias', 'transformer.h.5.attn.masked_bias', 'transformer.h.1.attn.bias', 'transformer.h.0.attn.bias', 'transformer.h.5.attn.bias', 'transformer.h.1.attn.masked_bias', 'lm_head.weight', 'transformer.h.2.attn.masked_bias']
```
Alternatively, is there a tutorial for training a GPT2 language model with TFTrainer?
## Details
<!-- Description of your issue -->
Reproduce
```
from transformers import *
model = GPT2LMHeadModel.from_pretrained("distilgpt2")
model.save_pretrained("./gpt2_pt")
model = TFGPT2LMHeadModel.from_pretrained("./gpt2_pt", from_pt=True)
```
However, we won't see the above warning if we use
```
model = TFGPT2LMHeadModel.from_pretrained("distilgpt2")
```
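A quick equivalence check can tell whether the converted weights actually match. This is just a sketch (it assumes the `./gpt2_pt` directory saved above); if the conversion is sound, both frameworks should agree on the logits:
```
import numpy as np
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel, TFGPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("distilgpt2")
pt_model = GPT2LMHeadModel.from_pretrained("./gpt2_pt")
tf_model = TFGPT2LMHeadModel.from_pretrained("./gpt2_pt", from_pt=True)

input_ids = tokenizer.encode("Hello world", return_tensors="pt")
with torch.no_grad():
    pt_logits = pt_model(input_ids)[0].numpy()
tf_logits = tf_model(input_ids.numpy())[0].numpy()
print(np.allclose(pt_logits, tf_logits, atol=1e-4))  # True suggests the warning was harmless
```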
If we already use `run_language_model.py` to train a GPT2LMHeadModel and import it into TFGPT2LMHeadModel, can we safely use the converted model even if we see the warning? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5588/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5588/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5587 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5587/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5587/comments | https://api.github.com/repos/huggingface/transformers/issues/5587/events | https://github.com/huggingface/transformers/issues/5587 | 652,801,008 | MDU6SXNzdWU2NTI4MDEwMDg= | 5,587 | Difference between AutoTokenizer.from_pretrained and BertTokenizer.from_pretrained | {
"login": "rxlian",
"id": 35382484,
"node_id": "MDQ6VXNlcjM1MzgyNDg0",
"avatar_url": "https://avatars.githubusercontent.com/u/35382484?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rxlian",
"html_url": "https://github.com/rxlian",
"followers_url": "https://api.github.com/users/rxlian/followers",
"following_url": "https://api.github.com/users/rxlian/following{/other_user}",
"gists_url": "https://api.github.com/users/rxlian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rxlian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rxlian/subscriptions",
"organizations_url": "https://api.github.com/users/rxlian/orgs",
"repos_url": "https://api.github.com/users/rxlian/repos",
"events_url": "https://api.github.com/users/rxlian/events{/privacy}",
"received_events_url": "https://api.github.com/users/rxlian/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! The [documentation](https://huggingface.co/transformers/model_doc/auto.html) mentions:\r\n\r\n> In many cases, the architecture you want to use can be guessed from the name or the path of the pretrained model you are supplying to the from_pretrained method.\r\n> \r\n> AutoClasses are here to do this job for you so that you automatically retrieve the relevant model given the name/path to the pretrained weights/config/vocabulary:\r\n> \r\n> Instantiating one of AutoModel, AutoConfig and AutoTokenizer will directly create a class of the relevant architecture (ex: model = AutoModel.from_pretrained('bert-base-cased') will create a instance of BertModel).\r\n\r\nSo if the string with which you're calling `from_pretrained` is a BERT checkpoint (like `bert-base-uncased`), then this:\r\n\r\n```py\r\nAutoTokenizer.from_pretrained(\"bert-base-uncased\")\r\n```\r\nis the same as this:\r\n```py\r\nBertTokenizer.from_pretrained(\"bert-base-uncased\")\r\n```\r\n\r\nHowever, `Auto*` are more flexible as you can specify any checkpoint and the correct model will be loaded, e.g.:\r\n\r\n```py\r\nAutoTokenizer.from_pretrained(\"gpt2\") # works and returns the correct GPT2Tokenizer instance\r\nBertTokenizer.from_pretrained(\"gpt2\") # fails\r\n```\r\n\r\nClosing this for now, let me know if you have other questions.",
"@LysandreJik \r\n\r\nI'm trying to recreate the \r\n```train_new_from_iterator``` \r\nmethod from the \r\n```class PreTrainedTokenizerFast(PreTrainedTokenizerBase)```\r\nclass\r\n\r\nBut the class that we have for training is different\r\n\r\n```\r\nauto = transformers.AutoTokenizer.from_pretrained(\"bert-base-cased\")\r\nprint(type(auto))\r\n<class 'transformers.models.bert.tokenization_bert_fast.BertTokenizerFast'>\r\n```\r\n\r\nWhat is the correct way to do so?",
"very nice Infromation AutoTokenizer",
"tanks a lot",
"Thanks!"
] | 1,594 | 1,691 | 1,594 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiasts can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
While loading a pretrained BERT model, what's the difference between AutoTokenizer.from_pretrained and BertTokenizer.from_pretrained? I'm very new to transformers and still confused about some basic things.
Thanks.
<!-- Description of your issue -->
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5587/reactions",
"total_count": 12,
"+1": 12,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5587/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5586 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5586/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5586/comments | https://api.github.com/repos/huggingface/transformers/issues/5586/events | https://github.com/huggingface/transformers/issues/5586 | 652,675,400 | MDU6SXNzdWU2NTI2NzU0MDA= | 5,586 | GPT2 past usage | {
"login": "cronoik",
"id": 18630848,
"node_id": "MDQ6VXNlcjE4NjMwODQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/18630848?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cronoik",
"html_url": "https://github.com/cronoik",
"followers_url": "https://api.github.com/users/cronoik/followers",
"following_url": "https://api.github.com/users/cronoik/following{/other_user}",
"gists_url": "https://api.github.com/users/cronoik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cronoik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cronoik/subscriptions",
"organizations_url": "https://api.github.com/users/cronoik/orgs",
"repos_url": "https://api.github.com/users/cronoik/repos",
"events_url": "https://api.github.com/users/cronoik/events{/privacy}",
"received_events_url": "https://api.github.com/users/cronoik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hey @cronoik, \r\n\r\nThanks for your issue. It does not really surprise me that the loss is different. \r\nIn the first case the following loss is calculated:\r\n\r\nloss = CrossEntropy(`input_ids`: \"I like sitting in my new chair and {} about\" vs. `labels`: \"like sitting in my new chair and {} about life\").\r\n\r\nwhere as in the second case the following loss is calculated:\r\n\r\nloss = CrossEntropy(`input_ids`: \"{} about\" vs. `labels`: \"about life\").\r\n\r\nThis is simplied - in reality the loss between the tokens of those words are calculated. \r\nThe important part to note here is that 1) `past` should not be used for training. It should be used to speed up inference.\r\n2) When using `past` only the output embeddings of the `input_ids` (in your case for \"{} about life\") are calculated and not also for the \"cached\" past input_ids.\r\n\r\nHope this answers your question",
"@patrickvonplaten Thank you a lot for your answer.",
"Hello! I have a question for gpt-2 lmhead model's input 'past_key_values'\r\nI want to use this option for model.generate module but there is an error if I use this option by specifying **model_specific_kwargs={'past':past} in model.generate's inputs... dimension error...\r\nwhat should I do for using this option for generation...?",
"Hey @chaeyoon-jang,\r\n\r\nCould you open a new issue for this?"
] | 1,594 | 1,652 | 1,594 | CONTRIBUTOR | null | Hello everyone,
I tried to answer this [stackoverflow question](https://stackoverflow.com/questions/62703391/estimate-token-probability-logits-given-a-sentence-without-computing-the-entire) and stumbled upon a strange behaviour I can't explain.
The following code will calculate the loss for a sentence with different single words injected:
```
from transformers import GPT2Tokenizer, GPT2LMHeadModel
import torch
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
def score(sentence):
tokenize_input = tokenizer.tokenize(sentence)
tensor_input = torch.tensor([tokenizer.convert_tokens_to_ids(tokenize_input)])
loss = model(tensor_input, labels=tensor_input)
return -loss[0].item()
candidates = ["watch", "run", "think", "apple", "light"]
sent_template = "I like sitting in my new chair and {} about life"
print({candidate: score(sent_template.format(candidate)) for candidate in candidates})
```
Output:
```
{'watch': -5.406847953796387, 'run': -5.533411502838135, 'think': -4.525279521942139, 'apple': -6.158637046813965, 'light': -5.835141658782959}
```
Now I wanted to use the `past` parameter according to the [documentation](https://huggingface.co/transformers/quickstart.html#using-the-past) and expected the same result:
```
from transformers import GPT2Tokenizer, GPT2LMHeadModel
import torch
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
past = "I like sitting in my new chair and"
past_tokenize_input = tokenizer.tokenize(past)
past_tensor_input = torch.tensor([tokenizer.convert_tokens_to_ids(past_tokenize_input)])
_, _, past = model(past_tensor_input, labels=past_tensor_input)
def score(sentence, past):
tokenize_input = tokenizer.tokenize(sentence, )
tensor_input = torch.tensor([tokenizer.convert_tokens_to_ids(tokenize_input)])
loss = model(tensor_input, labels=tensor_input, past=past)
return -loss[0].item()
candidates = ["watch", "run", "think", "apple", "light"]
sent_template = " {} about life"
print({candidate: score(sent_template.format(candidate), past) for candidate in candidates})
```
but the loss is different:
```
{'watch': -7.811002731323242, 'run': -6.370519638061523, 'think': -3.460831642150879, 'apple': -9.08120346069336, 'light': -8.28120231628418}
```
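For contrast, here is (roughly) the usage the linked quickstart has in mind, reusing the `model` and `tokenizer` above: `past` caches keys/values during generation, and each step feeds only the newest token. This is a paraphrased sketch, not verified against the exact quickstart code:
```
import torch

generated = tokenizer.encode("I like sitting in my new chair and")
context = torch.tensor([generated])
past = None

with torch.no_grad():
    for _ in range(5):
        logits, past = model(context, past=past)[:2]
        token = torch.argmax(logits[..., -1, :])
        generated.append(token.item())
        context = token.unsqueeze(0).unsqueeze(0)  # shape (1, 1): only the new token

print(tokenizer.decode(generated))
```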
Is this the intended behaviour or am I doing something wrong? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5586/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5586/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5585 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5585/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5585/comments | https://api.github.com/repos/huggingface/transformers/issues/5585/events | https://github.com/huggingface/transformers/pull/5585 | 652,661,344 | MDExOlB1bGxSZXF1ZXN0NDQ1Njk1MTk2 | 5,585 | Update question template | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5585?src=pr&el=h1) Report\n> Merging [#5585](https://codecov.io/gh/huggingface/transformers/pull/5585?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d6eab53058015483e9cbcbfee4bf900c3a8ab772&el=desc) will **increase** coverage by `0.80%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5585?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5585 +/- ##\n==========================================\n+ Coverage 76.96% 77.77% +0.80% \n==========================================\n Files 145 145 \n Lines 25317 25317 \n==========================================\n+ Hits 19486 19690 +204 \n+ Misses 5831 5627 -204 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5585?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/5585/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5585/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-49.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/5585/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jdHJsLnB5) | `83.08% <0.00%> (-14.71%)` | :arrow_down: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5585/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `33.56% <0.00%> (-8.93%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/5585/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96.62% <0.00%> (-2.54%)` | :arrow_down: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5585/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5585/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.68% <0.00%> (-0.72%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5585/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.88% <0.00%> (+0.98%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5585/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5585/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5585?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5585?src=pr&el=footer). Last update [d6eab53...ad24260](https://codecov.io/gh/huggingface/transformers/pull/5585?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,594 | 1,594 | 1,594 | COLLABORATOR | null | Point people to the forum. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5585/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5585/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5585",
"html_url": "https://github.com/huggingface/transformers/pull/5585",
"diff_url": "https://github.com/huggingface/transformers/pull/5585.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5585.patch",
"merged_at": 1594212396000
} |
https://api.github.com/repos/huggingface/transformers/issues/5584 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5584/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5584/comments | https://api.github.com/repos/huggingface/transformers/issues/5584/events | https://github.com/huggingface/transformers/issues/5584 | 652,643,644 | MDU6SXNzdWU2NTI2NDM2NDQ= | 5,584 | On running finetune.py for seq2seq, the following error comes up: optimizer_step() got an unexpected keyword argument 'using_native_amp' | {
"login": "laibamehnaz",
"id": 36405283,
"node_id": "MDQ6VXNlcjM2NDA1Mjgz",
"avatar_url": "https://avatars.githubusercontent.com/u/36405283?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/laibamehnaz",
"html_url": "https://github.com/laibamehnaz",
"followers_url": "https://api.github.com/users/laibamehnaz/followers",
"following_url": "https://api.github.com/users/laibamehnaz/following{/other_user}",
"gists_url": "https://api.github.com/users/laibamehnaz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/laibamehnaz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/laibamehnaz/subscriptions",
"organizations_url": "https://api.github.com/users/laibamehnaz/orgs",
"repos_url": "https://api.github.com/users/laibamehnaz/repos",
"events_url": "https://api.github.com/users/laibamehnaz/events{/privacy}",
"received_events_url": "https://api.github.com/users/laibamehnaz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Same issue here on 0.8.4. Downgrading to 0.8.1 works for me.\r\n\r\nRelated to recent pytorch-lightning commits for Apex: https://github.com/PyTorchLightning/pytorch-lightning/commit/0a092f66836912a714804c5103f03c7929ebf774",
"however wandb is not working with 0.8.1",
"Getting the same issue today with 0.8.4. Was working fine last week -- not sure what changed :-( -- will try to downgrade Lightning and see what happens.\r\n\r\nUpdate: downgrading to Lightning 0.7.5 -- something is running (training now), but getting bunch of warnings as well. Any idea of the expected compatibility requirements @sshleifer ? Can also try 0.8.1 -- or take a look at any related issues.\r\n\r\nI'm running HF Transformers from source, sync'ed today.",
"0.8.1 is the current version that I am running. @marton-avrios, wandb works for me in that version. Want to post a traceback in a new issue with your error?",
"@sshleifer 0.8.1 works well for me as well. And no warnings, like the older 0.7.x. Seems that 0.8.4 is what you get by default via `pip`. Hopefully that gets fixed on one end or the other, but 0.8.1 good for me. Should have replied sooner...",
"The reason it changed is because the overloaded `optimizer_step` does not include the new parameters.\r\n\r\nThis can be fixed in the demo scripts by adding the parameters to the overriding function.",
"> 0.8.1 is the current version that I am running. @marton-avrios, wandb works for me in that version. Want to post a traceback in a new issue with your error?\r\n\r\nSee #5739 ",
"> The reason it changed is because the overloaded `optimizer_step` does not include the new parameters.\r\n> \r\n> This can be fixed in the demo scripts by adding the parameters to the overriding function.\r\n\r\nThank you. This helped me out.",
"> > The reason it changed is because the overloaded `optimizer_step` does not include the new parameters.\r\n> > This can be fixed in the demo scripts by adding the parameters to the overriding function.\r\n> \r\n> Thank you. This helped me out.\r\n\r\nHow did you solve it?",
"> > > The reason it changed is because the overloaded `optimizer_step` does not include the new parameters.\r\n> > > This can be fixed in the demo scripts by adding the parameters to the overriding function.\r\n> > \r\n> > \r\n> > Thank you. This helped me out.\r\n> \r\n> How did you solve it?\r\n\r\nAre you facing this issue ?",
"I faced the same issue after upgrading to pytorch-lightning to 0.9.0. The solution above, adding parameters works. In my case using the huggingface lightning_base.py,\r\nI add using_native_amp=None in `def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx, second_order_closure=None,using_native_amp=None):`\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,594 | 1,605 | 1,605 | NONE | null | File "finetune.py", line 344, in <module>
main(args)
File "finetune.py", line 322, in main
logger=logger,
File "/content/drive/My Drive/Colab Notebooks/transformers/examples/lightning_base.py", line 336, in generic_train
trainer.fit(model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 979, in fit
self.single_gpu_train(model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/distrib_parts.py", line 185, in single_gpu_train
self.run_pretrain_routine(model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 1156, in run_pretrain_routine
self.train()
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/training_loop.py", line 370, in train
self.run_training_epoch()
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/training_loop.py", line 452, in run_training_epoch
batch_output = self.run_training_batch(batch, batch_idx)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/training_loop.py", line 657, in run_training_batch
grad_norm_dic = self.run_batch_backward_pass(split_batch, batch_idx, opt_idx, optimizer)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/training_loop.py", line 709, in run_batch_backward_pass
self.call_optimizer_step(optimizer, opt_idx, batch_idx, split_batch)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/training_loop.py", line 747, in call_optimizer_step
using_native_amp=native_amp)
TypeError: optimizer_step() got an unexpected keyword argument 'using_native_amp'
I am using pytorch-lightning==0.8.1.
Can I get some help?
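Based on the fix discussed in the comments above, the override in `lightning_base.py` just needs to accept the keyword arguments that newer pytorch-lightning versions pass. A sketch (the body below is the usual minimal step sequence, not the verbatim upstream code):
```
def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx,
                   second_order_closure=None, using_native_amp=None):
    optimizer.step()
    optimizer.zero_grad()
    self.lr_scheduler.step()
```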
Thank you | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5584/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5584/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5583 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5583/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5583/comments | https://api.github.com/repos/huggingface/transformers/issues/5583/events | https://github.com/huggingface/transformers/pull/5583 | 652,544,887 | MDExOlB1bGxSZXF1ZXN0NDQ1NTkzMDY2 | 5,583 | Test XLA examples | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5583?src=pr&el=h1) Report\n> Merging [#5583](https://codecov.io/gh/huggingface/transformers/pull/5583?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/33e43edddcab60217027dcf7f6570eead1195083&el=desc) will **increase** coverage by `1.52%`.\n> The diff coverage is `40.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5583?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5583 +/- ##\n==========================================\n+ Coverage 76.31% 77.83% +1.52% \n==========================================\n Files 145 145 \n Lines 25049 25053 +4 \n==========================================\n+ Hits 19116 19500 +384 \n+ Misses 5933 5553 -380 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5583?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/testing\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5583/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90ZXN0aW5nX3V0aWxzLnB5) | `76.47% <40.00%> (-4.39%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5583/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `66.25% <0.00%> (-32.52%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5583/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.09% <0.00%> (-1.03%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5583/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.26% <0.00%> (-0.34%)` | :arrow_down: |\n| [src/transformers/data/processors/utils.py](https://codecov.io/gh/huggingface/transformers/pull/5583/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvdXRpbHMucHk=) | `27.63% <0.00%> (+1.31%)` | :arrow_up: |\n| [src/transformers/tokenization\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5583/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `90.09% <0.00%> (+1.80%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5583/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+2.25%)` | :arrow_up: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5583/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `98.63% <0.00%> (+2.73%)` | :arrow_up: |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/5583/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `77.55% <0.00%> (+11.22%)` | :arrow_up: |\n| [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/5583/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `49.09% <0.00%> (+17.09%)` | :arrow_up: |\n| ... and [7 more](https://codecov.io/gh/huggingface/transformers/pull/5583/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5583?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5583?src=pr&el=footer). Last update [33e43ed...8f84df6](https://codecov.io/gh/huggingface/transformers/pull/5583?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,594 | 1,594 | 1,594 | MEMBER | null | Add a script to test examples on XLA. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5583/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5583/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5583",
"html_url": "https://github.com/huggingface/transformers/pull/5583",
"diff_url": "https://github.com/huggingface/transformers/pull/5583.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5583.patch",
"merged_at": 1594300760000
} |
https://api.github.com/repos/huggingface/transformers/issues/5582 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5582/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5582/comments | https://api.github.com/repos/huggingface/transformers/issues/5582/events | https://github.com/huggingface/transformers/pull/5582 | 652,527,771 | MDExOlB1bGxSZXF1ZXN0NDQ1NTc5MDEz | 5,582 | Rename files | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Mmm, guess this change is not possible as long as the file does not support evaluation as well."
] | 1,594 | 1,651 | 1,594 | COLLABORATOR | null | As discussed in #4829 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5582/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5582/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5582",
"html_url": "https://github.com/huggingface/transformers/pull/5582",
"diff_url": "https://github.com/huggingface/transformers/pull/5582.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5582.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/5581 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5581/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5581/comments | https://api.github.com/repos/huggingface/transformers/issues/5581/events | https://github.com/huggingface/transformers/pull/5581 | 652,499,217 | MDExOlB1bGxSZXF1ZXN0NDQ1NTU1NjQ0 | 5,581 | [mbart] prepare_translation_batch passes **kwargs to allow DeprecationWarning | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,594 | 1,594 | 1,594 | CONTRIBUTOR | null | Since we deleted the old `pad_to_max_length` kwarg, we want to give the user a deprecation warning if it is passed. By passing all `**kwargs` to tokenizer.__call__, all improper parameter usage will cause appropriate warnings/errors. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5581/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5581/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5581",
"html_url": "https://github.com/huggingface/transformers/pull/5581",
"diff_url": "https://github.com/huggingface/transformers/pull/5581.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5581.patch",
"merged_at": 1594143965000
} |
https://api.github.com/repos/huggingface/transformers/issues/5580 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5580/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5580/comments | https://api.github.com/repos/huggingface/transformers/issues/5580/events | https://github.com/huggingface/transformers/issues/5580 | 652,469,289 | MDU6SXNzdWU2NTI0NjkyODk= | 5,580 | TypeError: 'BertTokenizer' object is not callable | {
"login": "axhiao",
"id": 6879331,
"node_id": "MDQ6VXNlcjY4NzkzMzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6879331?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/axhiao",
"html_url": "https://github.com/axhiao",
"followers_url": "https://api.github.com/users/axhiao/followers",
"following_url": "https://api.github.com/users/axhiao/following{/other_user}",
"gists_url": "https://api.github.com/users/axhiao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/axhiao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/axhiao/subscriptions",
"organizations_url": "https://api.github.com/users/axhiao/orgs",
"repos_url": "https://api.github.com/users/axhiao/repos",
"events_url": "https://api.github.com/users/axhiao/events{/privacy}",
"received_events_url": "https://api.github.com/users/axhiao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! It seems you're running on an older `transformers` version. The `__call__` was implemented only in version v3.0.0+. Do you mind pasting your environment information here?\r\n\r\nYou should have a dropdown in the documentation with the different versions, so that you can select the documentation that works for your version.",
"yes, you are right. I got the old version because I had the [mmf ](https://github.com/facebookresearch/mmf) framework depending on transformers==2.0.3. So when I run pip install transformers, it just did nothing due to the existence of the old version. Thank you!",
"Thank you a lot!",
"So we have to update the transformer to the latest version, and it would work? Is that the case "
] | 1,594 | 1,690 | 1,594 | NONE | null | ```
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
sequence_a = "HuggingFace is based in NYC"
sequence_b = "Where is HuggingFace based?"
encoded_dict = tokenizer(sequence_a, sequence_b)
```
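On pre-3.0 versions, the equivalent call is `encode_plus`, which returns the same kind of dictionary. A sketch reusing the sequences above (assuming transformers 2.x):
```
encoded_dict = tokenizer.encode_plus(sequence_a, sequence_b)
print(encoded_dict["token_type_ids"])
```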
This will produce the error `'BertTokenizer' object is not callable`. Maybe I should call `tokenizer.tokenize()`? But I think the [doc](https://huggingface.co/transformers/glossary.html#token-type-ids) should also be updated! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5580/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5580/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5579 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5579/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5579/comments | https://api.github.com/repos/huggingface/transformers/issues/5579/events | https://github.com/huggingface/transformers/issues/5579 | 652,448,887 | MDU6SXNzdWU2NTI0NDg4ODc= | 5,579 | OSError: Model name 'facebook/bart-large-cnn' was not found in tokenizers model name list | {
"login": "WangHexie",
"id": 31768052,
"node_id": "MDQ6VXNlcjMxNzY4MDUy",
"avatar_url": "https://avatars.githubusercontent.com/u/31768052?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WangHexie",
"html_url": "https://github.com/WangHexie",
"followers_url": "https://api.github.com/users/WangHexie/followers",
"following_url": "https://api.github.com/users/WangHexie/following{/other_user}",
"gists_url": "https://api.github.com/users/WangHexie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WangHexie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WangHexie/subscriptions",
"organizations_url": "https://api.github.com/users/WangHexie/orgs",
"repos_url": "https://api.github.com/users/WangHexie/repos",
"events_url": "https://api.github.com/users/WangHexie/events{/privacy}",
"received_events_url": "https://api.github.com/users/WangHexie/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Did you get this solved? I cannot load in t5-large.",
"> Did you get this solved? I cannot load in t5-large.\r\n\r\nsolved simply by reactivating the environment and restarting the program. I installed old version of transformers, after upgrading transformers, I forgot to deactivate the environment. "
] | 1,594 | 1,594 | 1,594 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bart): **facebook/bart-large-cnn**
Language I am using the model on: English
The problem arises when using:
* [x] the official example scripts: summarization
## To reproduce
Steps to reproduce the behavior:
Run the code below:
```python
from transformers import BartTokenizer, BartForConditionalGeneration, BartConfig
tokenizer = BartTokenizer.from_pretrained('facebook/bart-large-cnn')
```
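(For reference: this turned out to be environment-related, so a quick sanity check of which `transformers` installation is actually being imported can help:)
```python
import transformers

print(transformers.__version__)  # should print 3.0.2 here
print(transformers.__file__)     # shows which installation is being picked up
```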
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
load the model
<!-- A clear and concise description of what you would expect to happen. -->
## Error information
```
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
<ipython-input-5-3a96f15c5285> in <module>
----> 1 tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
~/.conda/envs/hug/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in from_pretrained(cls, *inputs, **kwargs)
1138
1139 """
-> 1140 return cls._from_pretrained(*inputs, **kwargs)
1141
1142 @classmethod
~/.conda/envs/hug/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in _from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs)
1244 ", ".join(s3_models),
1245 pretrained_model_name_or_path,
-> 1246 list(cls.vocab_files_names.values()),
1247 )
1248 )
OSError: Model name 'facebook/bart-large-cnn' was not found in tokenizers model name list (facebook/bart-base, facebook/bart-large, facebook/bart-large-mnli, facebook/bart-large-cnn, facebook/bart-large-xsum, yjernite/bart_eli5). We assumed 'facebook/bart-large-cnn' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.
```
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.0.2
- Platform: centos
- Python version: python 3.7.0
- PyTorch version (GPU?): pytorch 1.5.0 cpu_py37hd91cbb3_0
- Tensorflow version (GPU?): tensorflow-gpu 2.0.0
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: no
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5579/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5579/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5578 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5578/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5578/comments | https://api.github.com/repos/huggingface/transformers/issues/5578/events | https://github.com/huggingface/transformers/pull/5578 | 652,424,215 | MDExOlB1bGxSZXF1ZXN0NDQ1NDk0NjY1 | 5,578 | [Reformer] - Cache hidden states and buckets to speed up inference | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5578?src=pr&el=h1) Report\n> Merging [#5578](https://codecov.io/gh/huggingface/transformers/pull/5578?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8ab565a4be5a7fd96b19ef88d474037ef31f27e5&el=desc) will **decrease** coverage by `0.10%`.\n> The diff coverage is `98.47%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5578?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5578 +/- ##\n==========================================\n- Coverage 77.32% 77.21% -0.11% \n==========================================\n Files 146 146 \n Lines 26047 26198 +151 \n==========================================\n+ Hits 20141 20230 +89 \n- Misses 5906 5968 +62 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5578?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5578/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `81.49% <ø> (ø)` | |\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/5578/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `90.56% <98.47%> (+2.70%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5578/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.02% <0.00%> (-69.52%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5578/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.46% <0.00%> (-0.76%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5578/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.11% <0.00%> (+0.28%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5578/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5578?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5578?src=pr&el=footer). Last update [8ab565a...7cd6a8f](https://codecov.io/gh/huggingface/transformers/pull/5578?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"@LysandreJik @sgugger - when you guys are back, feel free to merge to PR or we can wait until I'm back - not too urgent this PR. LGTM to merge.",
"Okey merging this. \r\n\r\n@sgugger @LysandreJik @sshleifer -> checked caching extensively and added multiple tests for equality. \r\nWould be nice if you can tag me for possible future issues related to this caching mechanism. "
] | 1,594 | 1,595 | 1,594 | MEMBER | null | As discussed with the authors, in Reformer we cache the hidden states and buckets to speed up inference for language generation. Caching only the hidden states requires at most half the memory of caching the key and value output vectors (often less, since the `hidden_size` can be smaller than the key and query projections).
The idea is to only recompute key and value projections within the same chunk so that the output will be equal.
- [x] Implement caching mechanism for Local Attention
- [x] Implement caching mechanism for LSH Attention
- [x] Add test
- [x] Add logic to generation
- [x] Refactoring and better naming
=> This results in a 10x speed up when generating up to 1000 tokens.
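A minimal usage sketch (assumptions: the `google/reformer-crime-and-punishment` checkpoint and that `generate` forwards the `use_cache` flag; sampling arguments are illustrative):
```python
from transformers import ReformerModelWithLMHead, ReformerTokenizer

tokenizer = ReformerTokenizer.from_pretrained("google/reformer-crime-and-punishment")
model = ReformerModelWithLMHead.from_pretrained("google/reformer-crime-and-punishment")

input_ids = tokenizer("A few months later", return_tensors="pt").input_ids
# With caching, key/value projections are only recomputed within the current chunk.
output_ids = model.generate(input_ids, max_length=100, do_sample=True, use_cache=True)
print(tokenizer.decode(output_ids[0]))
```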
**Some fun trials trying to make Reformer generate 16000 tokens for its own Wikipedia article [here](https://colab.research.google.com/drive/1Oao8vBtDkz6v1E1efUhTlug5uuxC_BnE?usp=sharing).**
*Review*:
Not urgent. Merging this PR can wait a bit. Added a couple of tests to make sure caching gives same output as non-caching. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5578/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5578/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5578",
"html_url": "https://github.com/huggingface/transformers/pull/5578",
"diff_url": "https://github.com/huggingface/transformers/pull/5578.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5578.patch",
"merged_at": 1594995463000
} |
https://api.github.com/repos/huggingface/transformers/issues/5577 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5577/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5577/comments | https://api.github.com/repos/huggingface/transformers/issues/5577/events | https://github.com/huggingface/transformers/pull/5577 | 652,363,208 | MDExOlB1bGxSZXF1ZXN0NDQ1NDQ1MDg0 | 5,577 | Fix tokenizers pretrained saving/loading | {
"login": "n1t0",
"id": 1217986,
"node_id": "MDQ6VXNlcjEyMTc5ODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1217986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/n1t0",
"html_url": "https://github.com/n1t0",
"followers_url": "https://api.github.com/users/n1t0/followers",
"following_url": "https://api.github.com/users/n1t0/following{/other_user}",
"gists_url": "https://api.github.com/users/n1t0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/n1t0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/n1t0/subscriptions",
"organizations_url": "https://api.github.com/users/n1t0/orgs",
"repos_url": "https://api.github.com/users/n1t0/repos",
"events_url": "https://api.github.com/users/n1t0/events{/privacy}",
"received_events_url": "https://api.github.com/users/n1t0/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hey @n1t0, great job, saving works now! :)\r\nDo you think `AddedToken` instances should be serialized with additional meta data? \r\n\r\nI managed to tweak JSON (de)serialization with custom hooks to process objects properly but it is difficult to infer the deserializing type from a single set of attributes, passing `__class__` meta data helps to make guesses at what types to deserialize. ",
"Superseded by #6026"
] | 1,594 | 1,651 | 1,597 | MEMBER | null | Fix #5571
I managed to fix the saving part, but I don't really know what to do for the loading part (in `_from_pretrained`). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5577/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5577/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5577",
"html_url": "https://github.com/huggingface/transformers/pull/5577",
"diff_url": "https://github.com/huggingface/transformers/pull/5577.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5577.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/5576 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5576/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5576/comments | https://api.github.com/repos/huggingface/transformers/issues/5576/events | https://github.com/huggingface/transformers/pull/5576 | 652,355,643 | MDExOlB1bGxSZXF1ZXN0NDQ1NDM5MDE2 | 5,576 | Fix tests imports dpr | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Some tests are failing, please only merge once everything pass.",
"Yep I'm on it",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5576?src=pr&el=h1) Report\n> Merging [#5576](https://codecov.io/gh/huggingface/transformers/pull/5576?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d2a93991158f15993eba9ab421d82766b892f948&el=desc) will **increase** coverage by `0.14%`.\n> The diff coverage is `74.31%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5576?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5576 +/- ##\n==========================================\n+ Coverage 76.84% 76.99% +0.14% \n==========================================\n Files 141 145 +4 \n Lines 24685 25049 +364 \n==========================================\n+ Hits 18969 19286 +317 \n- Misses 5716 5763 +47 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5576?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5576/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `87.65% <ø> (ø)` | |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5576/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.60% <ø> (ø)` | |\n| [src/transformers/data/datasets/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5576/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL3NxdWFkLnB5) | `47.56% <47.56%> (ø)` | |\n| [src/transformers/tokenization\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/5576/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `57.65% <57.65%> (ø)` | |\n| [src/transformers/modeling\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/5576/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kcHIucHk=) | `97.45% <97.45%> (ø)` | |\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/5576/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.24% <100.00%> (+0.01%)` | :arrow_up: |\n| [src/transformers/configuration\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/5576/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2Rwci5weQ==) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/data/datasets/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/5576/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL19faW5pdF9fLnB5) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5576/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `66.25% <0.00%> (-32.52%)` | :arrow_down: |\n| ... and [5 more](https://codecov.io/gh/huggingface/transformers/pull/5576/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5576?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5576?src=pr&el=footer). Last update [e49393c...878b09c](https://codecov.io/gh/huggingface/transformers/pull/5576?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"All green, merging"
] | 1,594 | 1,594 | 1,594 | MEMBER | null | There were changes in the locations of some functions used for tests in #5350.
When #5279 was merged some tests couldn't run.
I fixed the imports of those functions.
When I re-ran the tests I noticed that some didn't pass for the DPRReaderTokenizer so I fixed them.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5576/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5576/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5576",
"html_url": "https://github.com/huggingface/transformers/pull/5576",
"diff_url": "https://github.com/huggingface/transformers/pull/5576.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5576.patch",
"merged_at": 1594132513000
} |
https://api.github.com/repos/huggingface/transformers/issues/5575 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5575/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5575/comments | https://api.github.com/repos/huggingface/transformers/issues/5575/events | https://github.com/huggingface/transformers/issues/5575 | 652,353,048 | MDU6SXNzdWU2NTIzNTMwNDg= | 5,575 | Separating premise and hypothesis in MNLI | {
"login": "prajjwal1",
"id": 24690051,
"node_id": "MDQ6VXNlcjI0NjkwMDUx",
"avatar_url": "https://avatars.githubusercontent.com/u/24690051?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/prajjwal1",
"html_url": "https://github.com/prajjwal1",
"followers_url": "https://api.github.com/users/prajjwal1/followers",
"following_url": "https://api.github.com/users/prajjwal1/following{/other_user}",
"gists_url": "https://api.github.com/users/prajjwal1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/prajjwal1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/prajjwal1/subscriptions",
"organizations_url": "https://api.github.com/users/prajjwal1/orgs",
"repos_url": "https://api.github.com/users/prajjwal1/repos",
"events_url": "https://api.github.com/users/prajjwal1/events{/privacy}",
"received_events_url": "https://api.github.com/users/prajjwal1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "joeddav",
"id": 9353833,
"node_id": "MDQ6VXNlcjkzNTM4MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joeddav",
"html_url": "https://github.com/joeddav",
"followers_url": "https://api.github.com/users/joeddav/followers",
"following_url": "https://api.github.com/users/joeddav/following{/other_user}",
"gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joeddav/subscriptions",
"organizations_url": "https://api.github.com/users/joeddav/orgs",
"repos_url": "https://api.github.com/users/joeddav/repos",
"events_url": "https://api.github.com/users/joeddav/events{/privacy}",
"received_events_url": "https://api.github.com/users/joeddav/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "joeddav",
"id": 9353833,
"node_id": "MDQ6VXNlcjkzNTM4MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joeddav",
"html_url": "https://github.com/joeddav",
"followers_url": "https://api.github.com/users/joeddav/followers",
"following_url": "https://api.github.com/users/joeddav/following{/other_user}",
"gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joeddav/subscriptions",
"organizations_url": "https://api.github.com/users/joeddav/orgs",
"repos_url": "https://api.github.com/users/joeddav/repos",
"events_url": "https://api.github.com/users/joeddav/events{/privacy}",
"received_events_url": "https://api.github.com/users/joeddav/received_events",
"type": "User",
"site_admin": false
}
] | [
"Might be of interest to @joeddav :)",
"I've solved this problem. Thanks a lot @joeddav for even showing interest. You guys are very supportive. I'll post what I did so that if someone is stuck, they can refer.\r\nIn all `SequenceClassification` model, there's a linear layer. So we can straightaway add the loss from both heads. You can choose to decide how to process the logits , for ex. `[u+v, u-v, u*v]` where `u` and `v` are the respective output vector/logits. It was not a good idea to directly deal with raw hidden states from `BertModel` in my case. I'm closing it now.",
"@prajjwal1 Glad you figured it out! FYI we launched a [discussion forum](https://discuss.huggingface.co/) this week (after you opened this issue I think). Questions like this would be well-suited to that forum if you have more to ask or want to help out other people in the community! 😇",
"@joeddav Yeah I have answered a couple of questions there already. I was the one who commented about requesting a forum and posted a link at ACL chat. The forum came into being the next day. Maybe someone was working inside the team while other team members didn't know. But really good to have it.",
"@prajjwal1 Ahhh yess, sorry I didn't make the connection :) Yes, we had been having some discussions about having our own forum and I knew we had some people working on it, but none of us on rocket chat realized it would be released the next day haha"
] | 1,594 | 1,594 | 1,594 | CONTRIBUTOR | null | # ❓ Questions & Help
I'm adding it here since I didn't receive any reply on SO. I think this query might be relevant for people working in the few-shot and contrastive/metric learning space.
I'm trying to implement a Siamese-like transformer architecture. Similar work has been done in the SentenceBERT paper. I'm facing an issue. To separate the hypothesis and the premise, I modify [this line](https://github.com/huggingface/transformers/blob/3dcb748e31be8c7c9e4f62926c5c144c62d07218/src/transformers/data/processors/glue.py#L131) from `_glue_convert_examples_to_features`. Instead I do:
```
batch_encoding_a = tokenizer(
    [example.text_a for example in examples],
    max_length=max_length,
    padding="max_length",
    truncation=True,
)
```
Did the same thing for `example.text_b` to obtain `batch_encoding_b`. Then I modify the `GlueDataset` by changing [this line mainly](https://github.com/huggingface/transformers/blob/3dcb748e31be8c7c9e4f62926c5c144c62d07218/src/transformers/data/datasets/glue.py#L125), since it will now return two items (segregated "hypothesis" and "premise").
Then `__getitem__` is modified accordingly to return `self.features_a[i], self.features_b[i]`.
That's the gist of how I'm obtaining segregated "hypothesis" and "premise". These are then passed to two BERTs (or one BERT if its weights are kept frozen).
This is how I've defined the `collate_fn`:
```
def siamese_data_collator(batch):
    features_a, features_b = [], []
    for item in batch:
        for k, v in item.items():
            if k == "a":
                features_a.append(v)
            else:
                features_b.append(v)
    return {
        "a": default_data_collator(features_a),
        "b": default_data_collator(features_b),
    }
```
Then the `dataloader` is created in the usual way. So when we iterate like this:
```
def _training_step(...):
    model.train()
    for k, v in inputs["a"].items():
        if isinstance(v, torch.Tensor):
            inputs["a"][k] = v.to(self.args.device)
    # we get inputs['a'] and inputs['b'] which are passed to the model
```
I had to modify `_training_step` and `evaluate` accordingly in the `Trainer` class.
Now the problem is that the model doesn't learn at all (`bert-base-uncased`). I tried using my `model` and modified `Trainer` with the standard `GlueDataset`, and it works. This leads to the conclusion that something is off with the data. The model should learn something (even if it is not being fed concatenated "hypothesis" and "premise").
The model basically has one BERT and one linear layer. The logits come from linear layer which is then used to compute loss function (typical siamese like architecture).
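For concreteness, a minimal sketch of the architecture described above (the SentenceBERT-style `[u + v, u - v, u * v]` feature combination is illustrative; `a` and `b` are assumed to hold only tokenizer outputs such as `input_ids` and `attention_mask`):
```python
import torch
import torch.nn as nn
from transformers import BertModel


class SiameseBert(nn.Module):
    def __init__(self, model_name="bert-base-uncased", num_labels=3):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        self.classifier = nn.Linear(3 * self.bert.config.hidden_size, num_labels)

    def forward(self, a, b):
        # Encode premise and hypothesis separately; index 1 is the pooled output.
        u = self.bert(**a)[1]
        v = self.bert(**b)[1]
        features = torch.cat([u + v, u - v, u * v], dim=-1)
        return self.classifier(features)
```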
Can you suggest whether there's an issue in how the `Dataset` is being created here, or propose an approach of your own to segregate "hypothesis" and "premise" so that they can be fed separately to BERT?
Link to [Stack Overflow question](https://stackoverflow.com/questions/62771502/seperating-premise-and-hypothesis-in-mnli) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5575/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5575/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5574 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5574/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5574/comments | https://api.github.com/repos/huggingface/transformers/issues/5574/events | https://github.com/huggingface/transformers/pull/5574 | 652,349,182 | MDExOlB1bGxSZXF1ZXN0NDQ1NDMzODk1 | 5,574 | Create README.md for electra-base-squad2 | {
"login": "kolk",
"id": 9049591,
"node_id": "MDQ6VXNlcjkwNDk1OTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/9049591?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kolk",
"html_url": "https://github.com/kolk",
"followers_url": "https://api.github.com/users/kolk/followers",
"following_url": "https://api.github.com/users/kolk/following{/other_user}",
"gists_url": "https://api.github.com/users/kolk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kolk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kolk/subscriptions",
"organizations_url": "https://api.github.com/users/kolk/orgs",
"repos_url": "https://api.github.com/users/kolk/repos",
"events_url": "https://api.github.com/users/kolk/events{/privacy}",
"received_events_url": "https://api.github.com/users/kolk/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"Thank you!"
] | 1,594 | 1,594 | 1,594 | CONTRIBUTOR | null | Readme file for deepset/electra-base-squad2 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5574/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5574/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5574",
"html_url": "https://github.com/huggingface/transformers/pull/5574",
"diff_url": "https://github.com/huggingface/transformers/pull/5574.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5574.patch",
"merged_at": 1594395585000
} |
https://api.github.com/repos/huggingface/transformers/issues/5573 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5573/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5573/comments | https://api.github.com/repos/huggingface/transformers/issues/5573/events | https://github.com/huggingface/transformers/issues/5573 | 652,338,186 | MDU6SXNzdWU2NTIzMzgxODY= | 5,573 | MBARTTokenizer set_lang logic will only work for src_lang=en_XX | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1845609017,
"node_id": "MDU6TGFiZWwxODQ1NjA5MDE3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/seq2seq",
"name": "seq2seq",
"color": "fef2c0",
"default": false,
"description": ""
},
{
"id": 2009457320,
"node_id": "MDU6TGFiZWwyMDA5NDU3MzIw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/translation",
"name": "translation",
"color": "b2d2f4",
"default": false,
"description": "machine translation utilities and models"
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,594 | 1,595 | 1,595 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5573/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5573/timeline | completed | null | null |
|
https://api.github.com/repos/huggingface/transformers/issues/5572 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5572/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5572/comments | https://api.github.com/repos/huggingface/transformers/issues/5572/events | https://github.com/huggingface/transformers/pull/5572 | 652,281,015 | MDExOlB1bGxSZXF1ZXN0NDQ1Mzc5Mjky | 5,572 | Create README.md | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5572?src=pr&el=h1) Report\n> Merging [#5572](https://codecov.io/gh/huggingface/transformers/pull/5572?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d2a93991158f15993eba9ab421d82766b892f948&el=desc) will **increase** coverage by `0.23%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5572?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5572 +/- ##\n==========================================\n+ Coverage 76.84% 77.07% +0.23% \n==========================================\n Files 141 141 \n Lines 24685 24685 \n==========================================\n+ Hits 18969 19027 +58 \n+ Misses 5716 5658 -58 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5572?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5572/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `61.90% <0.00%> (-33.34%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5572/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.69% <0.00%> (-29.45%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5572/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.96% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5572/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.38% <0.00%> (+68.46%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5572?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5572?src=pr&el=footer). Last update [d2a9399...e5c8277](https://codecov.io/gh/huggingface/transformers/pull/5572?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,594 | 1,594 | 1,594 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5572/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5572/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5572",
"html_url": "https://github.com/huggingface/transformers/pull/5572",
"diff_url": "https://github.com/huggingface/transformers/pull/5572.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5572.patch",
"merged_at": 1594395769000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5571 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5571/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5571/comments | https://api.github.com/repos/huggingface/transformers/issues/5571/events | https://github.com/huggingface/transformers/issues/5571 | 652,172,411 | MDU6SXNzdWU2NTIxNzI0MTE= | 5,571 | Tokenizers save_pretrained doesn't work with custom vocabs (v3.0.2) | {
"login": "mozharovsky",
"id": 6762769,
"node_id": "MDQ6VXNlcjY3NjI3Njk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6762769?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mozharovsky",
"html_url": "https://github.com/mozharovsky",
"followers_url": "https://api.github.com/users/mozharovsky/followers",
"following_url": "https://api.github.com/users/mozharovsky/following{/other_user}",
"gists_url": "https://api.github.com/users/mozharovsky/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mozharovsky/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mozharovsky/subscriptions",
"organizations_url": "https://api.github.com/users/mozharovsky/orgs",
"repos_url": "https://api.github.com/users/mozharovsky/repos",
"events_url": "https://api.github.com/users/mozharovsky/events{/privacy}",
"received_events_url": "https://api.github.com/users/mozharovsky/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | {
"login": "n1t0",
"id": 1217986,
"node_id": "MDQ6VXNlcjEyMTc5ODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1217986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/n1t0",
"html_url": "https://github.com/n1t0",
"followers_url": "https://api.github.com/users/n1t0/followers",
"following_url": "https://api.github.com/users/n1t0/following{/other_user}",
"gists_url": "https://api.github.com/users/n1t0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/n1t0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/n1t0/subscriptions",
"organizations_url": "https://api.github.com/users/n1t0/orgs",
"repos_url": "https://api.github.com/users/n1t0/repos",
"events_url": "https://api.github.com/users/n1t0/events{/privacy}",
"received_events_url": "https://api.github.com/users/n1t0/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "n1t0",
"id": 1217986,
"node_id": "MDQ6VXNlcjEyMTc5ODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1217986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/n1t0",
"html_url": "https://github.com/n1t0",
"followers_url": "https://api.github.com/users/n1t0/followers",
"following_url": "https://api.github.com/users/n1t0/following{/other_user}",
"gists_url": "https://api.github.com/users/n1t0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/n1t0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/n1t0/subscriptions",
"organizations_url": "https://api.github.com/users/n1t0/orgs",
"repos_url": "https://api.github.com/users/n1t0/repos",
"events_url": "https://api.github.com/users/n1t0/events{/privacy}",
"received_events_url": "https://api.github.com/users/n1t0/received_events",
"type": "User",
"site_admin": false
}
] | [
"Fixed by #6026",
"> Fixed by #6026\r\n\r\nI am still experiencing this issue when running transformers 3.1\r\n\r\nIf you load from pretrained as follows:\r\n```py\r\ntokenizer = RobertaTokenizer.from_pretrained(\"path/to/tokenizer/folder\")\r\n```\r\nthen the tokenizer init_kwargs will load appropriately. The init_kwargs takes the form of a dictionary as follows:\r\n```py\r\n{\r\nmerges_file: \"file/path/...\",\r\nmodel_max_length: 512,\r\nvocab_file: \"file/path/...\",\r\n}\r\n```\r\nIn such a scenario the tokenizer can be saved using the save_pretrained functionality as intended.\r\n\r\nHowever, when defining the tokenizer using the vocab_file and merge_file arguments, as follows:\r\n```py\r\ntokenizer = RobertaTokenizer(vocab_file='file/path/vocab.json', merges_file='file_path/merges.txt')\r\n```\r\nthe resulting init_kwargs appears to default to:\r\n```py\r\n{\r\nbos_token: AddedToken(bos_token, lstrip=False, rstrip=False) \r\neos_token = AddedToken(eos_token, lstrip=False, rstrip=False)\r\nsep_token = AddedToken(sep_token, lstrip=False, rstrip=False)\r\ncls_token = AddedToken(cls_token, lstrip=False, rstrip=False)\r\nunk_token = AddedToken(unk_token, lstrip=False, rstrip=False) \r\npad_token = AddedToken(pad_token, lstrip=False, rstrip=False) \r\n}\r\n```\r\nwhich i see is defined within the RobertaTokenizer class of tokenization_roberta, but should be assigned along with the vocab_file and merges_file which does not appear to be the case.\r\n\r\nThis means that within the save_pretrained() function, the lines that are causing the issue are:\r\n```py\r\ntokenizer_config = copy.deepcopy(self.init_kwargs)\r\nif len(self.init_inputs) > 0:\r\n tokenizer_config[\"init_inputs\"] = copy.deepcopy(self.init_inputs)\r\nfor file_id in self.vocab_files_names.keys():\r\n tokenizer_config.pop(file_id, None)\r\n\r\nwith open(tokenizer_config_file, \"w\", encoding=\"utf-8\") as f:\r\n f.write(json.dumps(tokenizer_config, ensure_ascii=False))\r\n\r\n```\r\nThe solution #6026 looks like it addresses a separate part of the save_pretrained function, and hasn't stopped this error being raised when I run the above scenario? I am running transformers = 3.1, with tokenizers = 0.8.1rc2",
"Hello! If you check the init method of the `RobertaTokenizer`, you'll see it does not only expect the `vocab_file` and `merges_file`, but it also accepts the special tokens you mention:\r\n\r\n```py\r\n def __init__(\r\n self,\r\n vocab_file,\r\n merges_file,\r\n errors=\"replace\",\r\n bos_token=\"<s>\",\r\n eos_token=\"</s>\",\r\n sep_token=\"</s>\",\r\n cls_token=\"<s>\",\r\n unk_token=\"<unk>\",\r\n pad_token=\"<pad>\",\r\n mask_token=\"<mask>\",\r\n add_prefix_space=False,\r\n **kwargs\r\n ):\r\n```\r\n\r\nThis should allow you to specify which special tokens should be used.\r\n\r\nIf this does not solve your issue, do you mind opening a new issue with your specific problem, including the code that raises the error and the full stack-trace? This will help us help you. Thank you!",
"Hi @LysandreJik , thanks for the quick reply. Unfortunately defining the other arguments didn't appear to solve the issue. I have open a new issue: #8306 which includes code to recreate the problem"
] | 1,594 | 1,604 | 1,597 | NONE | null | # 🐛 Bug
## Information
I'm using an instance of `RobertaTokenizerFast` with my custom vocab (pretrained using the `tokenizers` library). When I try to save the tokenizer using the `tokenizer.save_pretrained(<path>)` method, a type error occurs.
I did a bit of investigation into why this happens and came up with a workaround: https://github.com/huggingface/transformers/issues/5393#issuecomment-654342933
## To reproduce
Steps to reproduce the behavior:
1. Choose any tokenizer that adds `AddedToken` to kwargs on init (e.g. `RobertaTokenizerFast` which adds a `mask_token` in the following manner)
2. Train a custom vocab using `tokenizers` library or download a pretrained vocab
3. Create an instance of the chosen tokenizer by passing the vocab and merges files to its constructor
4. Call the `save_pretrained(<path>)` method
```python
tokenizer_kwargs = RobertaTokenizerFast.from_pretrained("roberta-base").init_kwargs
vocab_file = tokenizer_kwargs.get("vocab_file")
merges_file = tokenizer_kwargs.get("merges_file")
tokenizer = RobertaTokenizerFast(vocab_file, merges_file)
tokenizer.save_pretrained(".")
```
This will result in the following error:
```python
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
in
----> 1 tokenizer.save_pretrained(".")
~/app/.venv/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in save_pretrained(self, save_directory)
1360
1361 with open(tokenizer_config_file, "w", encoding="utf-8") as f:
-> 1362 f.write(json.dumps(tokenizer_config, ensure_ascii=False))
1363
1364 with open(special_tokens_map_file, "w", encoding="utf-8") as f:
~/.pyenv/versions/3.7.6/lib/python3.7/json/__init__.py in dumps(obj, skipkeys, ensure_ascii, check_circular, allow_nan, cls, indent, separators, default, sort_keys, **kw)
236 check_circular=check_circular, allow_nan=allow_nan, indent=indent,
237 separators=separators, default=default, sort_keys=sort_keys,
--> 238 **kw).encode(obj)
239
240
~/.pyenv/versions/3.7.6/lib/python3.7/json/encoder.py in encode(self, o)
197 # exceptions aren't as detailed. The list call should be roughly
198 # equivalent to the PySequence_Fast that ''.join() would do.
--> 199 chunks = self.iterencode(o, _one_shot=True)
200 if not isinstance(chunks, (list, tuple)):
201 chunks = list(chunks)
~/.pyenv/versions/3.7.6/lib/python3.7/json/encoder.py in iterencode(self, o, _one_shot)
255 self.key_separator, self.item_separator, self.sort_keys,
256 self.skipkeys, _one_shot)
--> 257 return _iterencode(o, 0)
258
259 def _make_iterencode(markers, _default, _encoder, _indent, _floatstr,
~/.pyenv/versions/3.7.6/lib/python3.7/json/encoder.py in default(self, o)
177
178 """
--> 179 raise TypeError(f'Object of type {o.__class__.__name__} '
180 f'is not JSON serializable')
181
TypeError: Object of type AddedToken is not JSON serializable
```
## Expected behavior
I expect the same behavior as if you would save a pretrained tokenizer from the hub:
```python
tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
tokenizer.save_pretrained(".")
```
Which in turn produces 4 files:
```
('/app/vocab.json',
'/app/merges.txt',
'/app/special_tokens_map.json',
'/app/added_tokens.json')
```
## Possible solution
A possible approach would be to pass hooks to the json methods for (de)serializing objects of complex types like `AddedToken`. Generally, a solution must preserve the type information, but since types are subject to change, it's unclear whether such a generalization is needed at all.
This is a sketch for the solution described above.
```python
import json
from typing import Any, Dict, Text


def deserialize_json_object(json_obj: Dict[Text, Any]) -> Any:
    # Pop the type tag so it is not passed to the constructor as a kwarg.
    classname = json_obj.pop("__class__", "dict")
    try:
        obj = eval(classname)(**json_obj)
    except NameError:
        obj = json_obj
    return obj


def serialize_object_to_json(obj: Any) -> Dict[Text, Any]:
    get_state = getattr(obj, "__getstate__", None)
    if callable(get_state):
        json_obj = get_state()
        json_obj["__class__"] = type(obj).__name__
    else:
        json_obj = obj.__dict__
    return json_obj


json.load(fp, object_hook=deserialize_json_object)    # fp: an open config file
json.dumps(obj, default=serialize_object_to_json)     # obj: the config to write
```
Another solution would be to map `AddedToken` instances to dicts in `tokenizer_config`, similarly to how the `write_dict` object is constructed here:
https://github.com/huggingface/transformers/blob/d6b0b9d451e7ffe0da72a5f532cc9bec1563e801/src/transformers/tokenization_utils_base.py#L1368
This will require deserializing dicts into `AddedToken` instances when loading vocabs and symbols.
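A minimal sketch of that second option (assumptions: `AddedToken` exposes `lstrip`/`rstrip`/`single_word` attributes, `str()` on it returns its content, and the `"__class__"` tag is something I'm introducing here):
```python
from tokenizers import AddedToken


def added_token_to_dict(value):
    # Serialize AddedToken instances to plain dicts before json.dumps.
    if isinstance(value, AddedToken):
        return {
            "__class__": "AddedToken",
            "content": str(value),
            "lstrip": value.lstrip,
            "rstrip": value.rstrip,
            "single_word": value.single_word,
        }
    return value


def dict_to_added_token(value):
    # Restore AddedToken instances when loading the tokenizer config.
    if isinstance(value, dict) and value.pop("__class__", None) == "AddedToken":
        return AddedToken(**value)
    return value
```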
## Environment info
- `transformers` version: 3.0.2
- Platform: macOS Catalina 10.15.5
- Python version: 3.7.6
- PyTorch version (GPU?): 1.5.1 (No)
- Tensorflow version (GPU?): N/A
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5571/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5571/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5570 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5570/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5570/comments | https://api.github.com/repos/huggingface/transformers/issues/5570/events | https://github.com/huggingface/transformers/issues/5570 | 652,145,624 | MDU6SXNzdWU2NTIxNDU2MjQ= | 5,570 | Freeze the token embeddings for finetuning | {
"login": "santhoshkolloju",
"id": 4193817,
"node_id": "MDQ6VXNlcjQxOTM4MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4193817?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/santhoshkolloju",
"html_url": "https://github.com/santhoshkolloju",
"followers_url": "https://api.github.com/users/santhoshkolloju/followers",
"following_url": "https://api.github.com/users/santhoshkolloju/following{/other_user}",
"gists_url": "https://api.github.com/users/santhoshkolloju/gists{/gist_id}",
"starred_url": "https://api.github.com/users/santhoshkolloju/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/santhoshkolloju/subscriptions",
"organizations_url": "https://api.github.com/users/santhoshkolloju/orgs",
"repos_url": "https://api.github.com/users/santhoshkolloju/repos",
"events_url": "https://api.github.com/users/santhoshkolloju/events{/privacy}",
"received_events_url": "https://api.github.com/users/santhoshkolloju/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"freezing embedding weights will result in faster training and will also consume less GPU memory. It might result in slightly less performance but I guess it ultimately depends on the task, the dataset etc.",
"A tangent question: is it possible to freeze only a subset of the token embedding weights? Background: I am adding some new tokens and I want the model to only fine tune on those.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"It would be a very interesting feature proposed by @AzharSultan like for ***prompt learning*** purpose. \r\nPyTorch doesn't allow to only update a part of the embeddings lookup table. A solution would be either masking gradients or creating a new lookup table just for the added tokens. Any plan to add that ? ",
"@montellasebastien did you experiment with these workarounds?",
"I am also interested in @AzharSultan's question. What is the recommended for doing this right now? "
] | 1,594 | 1,658 | 1,603 | NONE | null | Hi,
I have fine-tuned the T5 model using the community notebooks provided. But when I looked into the fine-tuning code under examples/seq2seq, the token embedding weights are frozen.
Can someone shed some light on how this affects the fine-tuning process?
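For reference, freezing the token embeddings roughly amounts to the following (a sketch using the generic `get_input_embeddings` accessor; the exact helper in examples/seq2seq may differ):
```python
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("t5-small")
# Parameters with requires_grad=False receive no gradient updates during fine-tuning.
for param in model.get_input_embeddings().parameters():
    param.requires_grad = False
```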
Thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5570/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5570/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5569 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5569/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5569/comments | https://api.github.com/repos/huggingface/transformers/issues/5569/events | https://github.com/huggingface/transformers/issues/5569 | 652,111,700 | MDU6SXNzdWU2NTIxMTE3MDA= | 5,569 | BertEmbeddings code for position_embeddings and word_embeddings | {
"login": "shampp",
"id": 55344772,
"node_id": "MDQ6VXNlcjU1MzQ0Nzcy",
"avatar_url": "https://avatars.githubusercontent.com/u/55344772?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shampp",
"html_url": "https://github.com/shampp",
"followers_url": "https://api.github.com/users/shampp/followers",
"following_url": "https://api.github.com/users/shampp/following{/other_user}",
"gists_url": "https://api.github.com/users/shampp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shampp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shampp/subscriptions",
"organizations_url": "https://api.github.com/users/shampp/orgs",
"repos_url": "https://api.github.com/users/shampp/repos",
"events_url": "https://api.github.com/users/shampp/events{/privacy}",
"received_events_url": "https://api.github.com/users/shampp/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, BERT uses absolute positional embeddings, not based on sinusoidal functions. The positional embeddings are learned. \r\n\r\nYou can check the [XLNet code](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_xlnet.py#L704-L712) or the [TransfoXL code](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_transfo_xl.py#L170-L186) for examples showcasing sinusoidal (relative) embeddings."
] | 1,594 | 1,594 | 1,594 | NONE | null | I am trying to figure out how exactly positions and words are embedded in BERT. I checked the code, and it seems the following code is used for the purpose. This implies that both embedding functions call the same `__init__` from the `nn.Embedding` class. Where exactly does the code do the actual (sine/cosine-based) position embedding? Where can I see that code?
```python
self.word_embeddings = nn.Embedding(config.vocab_size, config.hidden_size, padding_idx=config.pad_token_id)
self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.hidden_size)

inputs_embeds = self.word_embeddings(input_ids)
position_embeddings = self.position_embeddings(position_ids)
```
 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5569/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5569/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5568 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5568/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5568/comments | https://api.github.com/repos/huggingface/transformers/issues/5568/events | https://github.com/huggingface/transformers/issues/5568 | 652,110,887 | MDU6SXNzdWU2NTIxMTA4ODc= | 5,568 | Use pretrained bert without embedding layers. | {
"login": "demdecuong",
"id": 32518096,
"node_id": "MDQ6VXNlcjMyNTE4MDk2",
"avatar_url": "https://avatars.githubusercontent.com/u/32518096?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/demdecuong",
"html_url": "https://github.com/demdecuong",
"followers_url": "https://api.github.com/users/demdecuong/followers",
"following_url": "https://api.github.com/users/demdecuong/following{/other_user}",
"gists_url": "https://api.github.com/users/demdecuong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/demdecuong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/demdecuong/subscriptions",
"organizations_url": "https://api.github.com/users/demdecuong/orgs",
"repos_url": "https://api.github.com/users/demdecuong/repos",
"events_url": "https://api.github.com/users/demdecuong/events{/privacy}",
"received_events_url": "https://api.github.com/users/demdecuong/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,594 | 1,614 | 1,614 | NONE | null | # ❓ Questions & Help
## Details
https://arxiv.org/pdf/2005.07421.pdf
I was inspired by the above paper. However, the authors feed BERT their soft-masked embeddings as input, which is not BERT's original input format.
So, is there any way for me to get BERT without its embedding layers?
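One possible pointer, assuming precomputed embeddings are acceptable: `BertModel.forward` accepts an `inputs_embeds` argument that bypasses the word-embedding lookup (note that position and token-type embeddings are still added inside `BertEmbeddings`):

```python
import torch
from transformers import BertModel

model = BertModel.from_pretrained("bert-base-uncased")

# Hypothetical soft-masked embeddings, e.g. produced by a detection network,
# with shape (batch_size, seq_len, hidden_size).
soft_masked_embeds = torch.randn(1, 10, model.config.hidden_size)

outputs = model(inputs_embeds=soft_masked_embeds)
last_hidden_state = outputs[0]
```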
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5568/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5568/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5567 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5567/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5567/comments | https://api.github.com/repos/huggingface/transformers/issues/5567/events | https://github.com/huggingface/transformers/issues/5567 | 652,018,759 | MDU6SXNzdWU2NTIwMTg3NTk= | 5,567 | How to get gradient wrt to a word embedding layer pytorch? | {
"login": "zeyuyun1",
"id": 43428393,
"node_id": "MDQ6VXNlcjQzNDI4Mzkz",
"avatar_url": "https://avatars.githubusercontent.com/u/43428393?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zeyuyun1",
"html_url": "https://github.com/zeyuyun1",
"followers_url": "https://api.github.com/users/zeyuyun1/followers",
"following_url": "https://api.github.com/users/zeyuyun1/following{/other_user}",
"gists_url": "https://api.github.com/users/zeyuyun1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zeyuyun1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zeyuyun1/subscriptions",
"organizations_url": "https://api.github.com/users/zeyuyun1/orgs",
"repos_url": "https://api.github.com/users/zeyuyun1/repos",
"events_url": "https://api.github.com/users/zeyuyun1/events{/privacy}",
"received_events_url": "https://api.github.com/users/zeyuyun1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, the `word_embeddings` variable is not returned so you won't obtain these except if you modify the file.",
"Thanks!"
] | 1,594 | 1,594 | 1,594 | CONTRIBUTOR | null | # ❓ Questions & Help
## Details
The idea behind this is that I want to try some old-school gradient-ascent-style visualization with a BERT model.
I want to know the effect of changes in the embedding layer on a specific dimension of a specific layer's output. Thus, I take the gradient of a specific dimension of a specific layer's output w.r.t. the first (word) embedding layer's output.
The best thing I can do here is the following:
```python
import torch
from transformers import BertTokenizer, BertModel

model = BertModel.from_pretrained('bert-base-uncased', output_attentions=True, output_hidden_states=True)
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)

s = 'I want to sleep'
inputs = tokenizer.encode_plus(s, return_tensors='pt', add_special_tokens=False, is_pretokenized=True)
input_ids = inputs['input_ids']

output = model(input_ids)
hidden_states = output[-2]

X = hidden_states[0]  # embedding output, shape [1, 4, 768] (batch_size, sentence_length, hidden_size)
y = hidden_states[3][0][0][0]  # 0th position, 0th dimension of the 3rd hidden layer's output; a scalar
# Gradient of y w.r.t. X; since y is a scalar, the gradient has the same shape as X.
torch.autograd.grad(y, X, retain_graph=True, create_graph=True)
```
This is, however, not good enough. I want the gradient w.r.t. the actual word embedding layer. However, the Transformers embedding layer also contains "position_embeddings" and "token_type_embeddings". Here's the code for the embedding layer:
```python
class BertEmbeddings(nn.Module):
"""Construct the embeddings from word, position and token_type embeddings.
"""
def __init__(self, config):
super(BertEmbeddings, self).__init__()
self.word_embeddings = nn.Embedding(config.vocab_size, config.hidden_size, padding_idx=0)
self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.hidden_size)
self.token_type_embeddings = nn.Embedding(config.type_vocab_size, config.hidden_size)
# self.LayerNorm is not snake-cased to stick with TensorFlow model variable name and be able to load
# any TensorFlow checkpoint file
self.LayerNorm = BertLayerNorm(config.hidden_size, eps=config.layer_norm_eps)
self.dropout = nn.Dropout(config.hidden_dropout_prob)
def forward(self, input_ids, token_type_ids=None, position_ids=None):
seq_length = input_ids.size(1)
if position_ids is None:
position_ids = torch.arange(seq_length, dtype=torch.long, device=input_ids.device)
position_ids = position_ids.unsqueeze(0).expand_as(input_ids)
if token_type_ids is None:
token_type_ids = torch.zeros_like(input_ids)
words_embeddings = self.word_embeddings(input_ids)
position_embeddings = self.position_embeddings(position_ids)
token_type_embeddings = self.token_type_embeddings(token_type_ids)
embeddings = words_embeddings + position_embeddings + token_type_embeddings
embeddings = self.LayerNorm(embeddings)
embeddings = self.dropout(embeddings)
return embeddings
```
Ideally, I want the gradient w.r.t. JUST `words_embeddings`, rather than w.r.t. `words_embeddings + position_embeddings + token_type_embeddings` followed by LayerNorm and dropout.
I think I can do this by modifying the model. Is there a way to do it without changing the model?
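One workaround that avoids editing the file, sketched under the assumption that capturing the intermediate tensor with hooks is acceptable: grab the output of `word_embeddings` with a forward hook and register a gradient hook on it.

```python
import torch
from transformers import BertTokenizer, BertModel

model = BertModel.from_pretrained('bert-base-uncased', output_hidden_states=True)
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

captured = {}

def save_grad(grad):
    captured['grad'] = grad

def forward_hook(module, inputs, output):
    # Raw output of word_embeddings, before position/token-type embeddings,
    # LayerNorm and dropout are applied.
    captured['words_embeddings'] = output
    output.register_hook(save_grad)

handle = model.embeddings.word_embeddings.register_forward_hook(forward_hook)

input_ids = tokenizer('I want to sleep', return_tensors='pt', add_special_tokens=False)['input_ids']
y = model(input_ids)[2][3][0, 0, 0]  # scalar: 3rd hidden layer, position 0, dim 0
y.backward()

grad_wrt_word_embeddings = captured['grad']  # same shape as the word-embedding output
handle.remove()
```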
**A link to original question on Stack Overflow**: https://stackoverflow.com/questions/62750798/how-to-get-gradient-wrt-to-a-specific-layers-output-pytorch
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5567/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5567/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5566 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5566/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5566/comments | https://api.github.com/repos/huggingface/transformers/issues/5566/events | https://github.com/huggingface/transformers/pull/5566 | 652,004,116 | MDExOlB1bGxSZXF1ZXN0NDQ1MTU1MTMw | 5,566 | [docs] fix model_doc links in model summary | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5566?src=pr&el=h1) Report\n> Merging [#5566](https://codecov.io/gh/huggingface/transformers/pull/5566?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1d2332861f5225aef17bd7e75abc670a72239081&el=desc) will **increase** coverage by `0.35%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5566?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5566 +/- ##\n==========================================\n+ Coverage 77.20% 77.55% +0.35% \n==========================================\n Files 141 141 \n Lines 24638 24638 \n==========================================\n+ Hits 19021 19108 +87 \n+ Misses 5617 5530 -87 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5566?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5566/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.78% <0.00%> (-74.20%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5566/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `61.90% <0.00%> (-33.34%)` | :arrow_down: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5566/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5566/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.72% <0.00%> (-1.38%)` | :arrow_down: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5566/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.68% <0.00%> (-0.72%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5566/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5566/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.11% <0.00%> (+0.28%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5566/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.60% <0.00%> (+1.16%)` | :arrow_up: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5566/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.48% <0.00%> (+8.92%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5566/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.04% <0.00%> (+49.56%)` | :arrow_up: |\n| ... 
and [1 more](https://codecov.io/gh/huggingface/transformers/pull/5566/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5566?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5566?src=pr&el=footer). Last update [1d23328...028e84a](https://codecov.io/gh/huggingface/transformers/pull/5566?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"No, this won't work across versions (if you switch to master docs for instance). The problem is in the first slash, I think removing it should be enough to make all link works.",
"Double-checked locally, this is the right fix: `/model_doc/gpt` -> `model_doc/gpt`\r\n ",
"should I make these changes ? also model_doc/gpt or model_doc/gpt.html ?",
"No need for the .html.\r\nI can force-push on your branch or let you do the changes, your call!",
"How much time does it take to these changes be reflected in the website?\r\n\r\nAlso, can I close my issue after this fix?\r\n\r\n@sgugger \r\n\r\nThanks! :)",
"The thing is that the bug will probably stay in the stable version of the doc forever (unless we manage to cherry-pick it somehow @LysandreJik ?). It is reflected in the [master version](https://huggingface.co/transformers/master/model_summary.html) already.\r\n\r\nYou can close your issue whenever you like :-)"
] | 1,594 | 1,594 | 1,594 | MEMBER | null | Possible fix for #5561
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5566/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5566/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5566",
"html_url": "https://github.com/huggingface/transformers/pull/5566",
"diff_url": "https://github.com/huggingface/transformers/pull/5566.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5566.patch",
"merged_at": 1594134373000
} |
https://api.github.com/repos/huggingface/transformers/issues/5565 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5565/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5565/comments | https://api.github.com/repos/huggingface/transformers/issues/5565/events | https://github.com/huggingface/transformers/issues/5565 | 651,997,744 | MDU6SXNzdWU2NTE5OTc3NDQ= | 5,565 | ❓ Why multiplying the output of T5 by some scalar before LM head ? | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"I think this would be a great question for our new forum, with a title like \"T5 architecture\": https://discuss.huggingface.co/ - would you mind posting it there? This seems like an interesting question people would probably like to see there.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,594 | 1,600 | 1,600 | CONTRIBUTOR | null | # ❓ Questions & Help
I'm wondering why the outputs of T5 are multiplied by a scalar before being fed to the LM head:
https://github.com/huggingface/transformers/blob/1d2332861f5225aef17bd7e75abc670a72239081/src/transformers/modeling_tf_t5.py#L1147
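For readers without the pinned line handy, the statement in question looks roughly like this (reproduced approximately; line numbers and exact form drift between versions):

```python
# modeling_tf_t5.py, just before the LM head (approximate):
sequence_output = sequence_output * (self.model_dim ** -0.5)
```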
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5565/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5565/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5564 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5564/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5564/comments | https://api.github.com/repos/huggingface/transformers/issues/5564/events | https://github.com/huggingface/transformers/issues/5564 | 651,961,048 | MDU6SXNzdWU2NTE5NjEwNDg= | 5,564 | Where is the documentation on migrating to the 3.0 tokenizer API? | {
"login": "githubrandomuser2017",
"id": 25097908,
"node_id": "MDQ6VXNlcjI1MDk3OTA4",
"avatar_url": "https://avatars.githubusercontent.com/u/25097908?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/githubrandomuser2017",
"html_url": "https://github.com/githubrandomuser2017",
"followers_url": "https://api.github.com/users/githubrandomuser2017/followers",
"following_url": "https://api.github.com/users/githubrandomuser2017/following{/other_user}",
"gists_url": "https://api.github.com/users/githubrandomuser2017/gists{/gist_id}",
"starred_url": "https://api.github.com/users/githubrandomuser2017/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/githubrandomuser2017/subscriptions",
"organizations_url": "https://api.github.com/users/githubrandomuser2017/orgs",
"repos_url": "https://api.github.com/users/githubrandomuser2017/repos",
"events_url": "https://api.github.com/users/githubrandomuser2017/events{/privacy}",
"received_events_url": "https://api.github.com/users/githubrandomuser2017/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @githubrandomuser2017 , AFAIK these methods will still work in v.3.0 as backward compatibility is maintained. This can help\r\nhttps://huggingface.co/transformers/preprocessing.html",
"@patil-suraj The page you mentioned (https://huggingface.co/transformers/preprocessing.html) doesn't mention anything about the missing functions (`encode_plus()` and `batch_encode_plus()`). Also, the functions are not in the tokenizer documentation page anymore (https://huggingface.co/transformers/main_classes/tokenizer.html). I need to look up some arguments.",
"for me `encode_plus()` and `batch_encode_plus()` are working as expected in v3.",
"Also as far as I understand \r\n`tokenizer` object is now a callable and by default it behaves as encode_plus, i.e it returns `input_ids` along with `attention_mask`, `token_type_ids` etc, this can be controlled using `return_attention_mask`, `return_token_type_ids` arguments.\r\n\r\nSo by default if you provide a single example to `tokenizer` it will behave as encode_plus and if you provide a batch of examples it'll behave like `batch_encode_plus`. \r\n\r\npadding and truncation is now controlled using `padding` and `truncation` arguments.\r\n\r\nSo \r\n```python3\r\ntokenizer.encode_plus(\"some text\", max_length=512, pad_to_max_length=True)\r\n``` \r\nis now equivalent to \r\n```python3\r\ntokenizer(\"some text\", max_length=512, padding=\"max_length\", truncation=True)\r\n````\r\n\r\nand\r\n```python3\r\ntokenizer.batch_encode_plus([\"some text\", \"some other text\"], max_length=512, pad_to_max_length=True)\r\n``` \r\nis equivalent to \r\n```python3\r\ntokenizer([\"some text\", \"some other text\"], max_length=512, padding=\"max_length\", truncation=True)\r\n````\r\n\r\nhttps://huggingface.co/transformers/main_classes/tokenizer.html#transformers.PreTrainedTokenizer.__call__",
"Indeed, @patil-suraj is right, `__call__` is a wrapper on top of both `encode_plus` and `batch_encode_plus` which will dispatch depending on whether you are providing a single example or a batch.\r\n\r\nWe don't promote the use of `encode_plus` and `batch_encode_plus` but we kept backward compatibility for them so they are still available and behave identically as in transformers version `2.X`.\r\n\r\nIf you want to access the doc for them, use the doc for version `2.X`: https://huggingface.co/transformers/v2.11.0/main_classes/tokenizer.html?highlight=batch_encode_plus#transformers.PreTrainedTokenizer.batch_encode_plus",
"But we are right we should add a migration guide, in particular for the padding and truncation commands. We can use the detailed migration guide in the description of this PR as a basis https://github.com/huggingface/transformers/pull/4510.\r\n\r\n@sgugger do you want to give it a look when you have some bandwidth?\r\n",
"Will try do something before the end of the week.",
"I took a stab at it and added it to our brand new forum, look [here](https://discuss.huggingface.co/t/migration-guide-from-v2-x-to-v3-x-for-the-tokenizer-api/55).",
"Wonderful! Thank you. \r\n\r\nFor the forum, do I use my Github credentials?",
"No, you need to create an account on https://huggingface.co/ if you don't have one already (same as for uploading models to the hub).",
"Thank you. Can you please put that fact (creating a huggingface.co account) on the main forum webpage? https://discuss.huggingface.co/"
] | 1,594 | 1,594 | 1,594 | NONE | null | I see that you folks have completely changed the API to do tokenizing, e.g. for BertTokenizer. I have a lot of code using the two methods `encode_plus()` and `batch_encode_plus()`, and when I went to the [documentation](https://huggingface.co/transformers/main_classes/tokenizer.html) to look up an argument, I found that these methods are completely gone. All that remains is a little blurb saying:
> `BatchEncoding` holds the output of the tokenizer’s encoding methods (`__call__`, `encode_plus` and `batch_encode_plus`) and is derived from a Python dictionary.
Are these two methods deprecated now? Did you post a migration guide for users?
On the main [Huggingface Transformers page](https://github.com/huggingface/transformers), you have sections for `Migrating from pytorch-transformers to transformers` and `Migrating from pytorch-pretrained-bert to transformers`, so it's not like there's no precedent for you to provide some information to users on major API changes. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5564/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5564/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5563 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5563/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5563/comments | https://api.github.com/repos/huggingface/transformers/issues/5563/events | https://github.com/huggingface/transformers/issues/5563 | 651,916,231 | MDU6SXNzdWU2NTE5MTYyMzE= | 5,563 | Bug in Question Answering pipeline when question is weird (unanswerable) | {
"login": "pavanchhatpar",
"id": 16511756,
"node_id": "MDQ6VXNlcjE2NTExNzU2",
"avatar_url": "https://avatars.githubusercontent.com/u/16511756?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pavanchhatpar",
"html_url": "https://github.com/pavanchhatpar",
"followers_url": "https://api.github.com/users/pavanchhatpar/followers",
"following_url": "https://api.github.com/users/pavanchhatpar/following{/other_user}",
"gists_url": "https://api.github.com/users/pavanchhatpar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pavanchhatpar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pavanchhatpar/subscriptions",
"organizations_url": "https://api.github.com/users/pavanchhatpar/orgs",
"repos_url": "https://api.github.com/users/pavanchhatpar/repos",
"events_url": "https://api.github.com/users/pavanchhatpar/events{/privacy}",
"received_events_url": "https://api.github.com/users/pavanchhatpar/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I believe https://github.com/huggingface/transformers/pull/5542 is trying to fix exactly this.",
"Yes, I see. The pipeline failing on that pull request shows the same error.\r\nBut, I think the fix would lie somewhere in squad feature creation because the `feature.token_to_orig_map` is what gives the `KeyError`. Took a diff between `v2.11.0` and `v3.0.2` for `pipelines.py` and here's what changed for the `QuestionAnsweringPipeline`\r\n```diff\r\n class QuestionAnsweringArgumentHandler(ArgumentHandler):\r\n@@ -1165,12 +1253,12 @@ class QuestionAnsweringPipeline(Pipeline):\r\n examples = self._args_parser(*args, **kwargs)\r\n features_list = [\r\n squad_convert_examples_to_features(\r\n- [example],\r\n- self.tokenizer,\r\n- kwargs[\"max_seq_len\"],\r\n- kwargs[\"doc_stride\"],\r\n- kwargs[\"max_question_len\"],\r\n- False,\r\n+ examples=[example],\r\n+ tokenizer=self.tokenizer,\r\n+ max_seq_length=kwargs[\"max_seq_len\"],\r\n+ doc_stride=kwargs[\"doc_stride\"],\r\n+ max_query_length=kwargs[\"max_question_len\"],\r\n+ is_training=False,\r\n tqdm_enabled=False,\r\n )\r\n for example in examples\r\n@@ -1184,33 +1272,34 @@ class QuestionAnsweringPipeline(Pipeline):\r\n with self.device_placement():\r\n if self.framework == \"tf\":\r\n fw_args = {k: tf.constant(v) for (k, v) in fw_args.items()}\r\n- start, end = self.model(fw_args)\r\n+ start, end = self.model(fw_args)[:2]\r\n start, end = start.numpy(), end.numpy()\r\n else:\r\n with torch.no_grad():\r\n # Retrieve the score for the context tokens only (removing question tokens)\r\n fw_args = {k: torch.tensor(v, device=self.device) for (k, v) in fw_args.items()}\r\n- start, end = self.model(**fw_args)\r\n+ start, end = self.model(**fw_args)[:2]\r\n start, end = start.cpu().numpy(), end.cpu().numpy()\r\n \r\n min_null_score = 1000000 # large and positive\r\n answers = []\r\n for (feature, start_, end_) in zip(features, start, end):\r\n- # Normalize logits and spans to retrieve the answer\r\n- start_ = np.exp(start_) / np.sum(np.exp(start_))\r\n- end_ = np.exp(end_) / np.sum(np.exp(end_))\r\n-\r\n # Mask padding and question\r\n start_, end_ = (\r\n start_ * np.abs(np.array(feature.p_mask) - 1),\r\n end_ * np.abs(np.array(feature.p_mask) - 1),\r\n )\r\n \r\n+ # Mask CLS\r\n+ start_[0] = end_[0] = 0\r\n+\r\n+ # Normalize logits and spans to retrieve the answer\r\n+ start_ = np.exp(start_ - np.log(np.sum(np.exp(start_), axis=-1, keepdims=True)))\r\n+ end_ = np.exp(end_ - np.log(np.sum(np.exp(end_), axis=-1, keepdims=True)))\r\n+\r\n if kwargs[\"handle_impossible_answer\"]:\r\n min_null_score = min(min_null_score, (start_[0] * end_[0]).item())\r\n \r\n- start_[0] = end_[0] = 0\r\n-\r\n starts, ends, scores = self.decode(start_, end_, kwargs[\"topk\"], kwargs[\"max_answer_len\"])\r\n char_to_word = np.array(example.char_to_word_offset)\r\n```\r\nIt doesn't seem like a lot of logic changed in this file except the position of `# Mask CLS` block and the pull request which fixes its position is not able to fix this bug yet.",
"@mfuntowicz, maybe you have some insights on this as you're working on the PR? :)",
"@pavanchhatpar Can you try on the master branch? I just pushed a change that should fix the case when questions are not answerable see b716a864f869ddc78c3c8eb00729fc9546c74ee4. \r\n\r\nLet us know if it resolve the issue 🙏 ",
"@mfuntowicz I tried with the master branch. Seems to work well now. Thanks for the fix!"
] | 1,594 | 1,594 | 1,594 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): TFDistilBertForQuestionAnswering
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
- Code
```python
from transformers import pipeline
qanlp = pipeline("question-answering", framework="tf") # even PyTorch gives the error
qanlp(context="I am a company", question="When is the bill due?", handle_impossible_answer=True) # happens even without handle_impossible_answer
```
- Error
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-4-4da7a3b5ca0e> in <module>()
----> 1 qanlp(context="I am a company", question="When is the bill due?", handle_impossible_answer=True)
/usr/local/lib/python3.6/dist-packages/transformers/pipelines.py in __call__(self, *args, **kwargs)
1314 ),
1315 }
-> 1316 for s, e, score in zip(starts, ends, scores)
1317 ]
1318
/usr/local/lib/python3.6/dist-packages/transformers/pipelines.py in <listcomp>(.0)
1314 ),
1315 }
-> 1316 for s, e, score in zip(starts, ends, scores)
1317 ]
1318
KeyError: 0
```
- Happens even on this hosted inference API https://huggingface.co/distilbert-base-cased-distilled-squad?text=When+is+the+bill+due%3F&context=I+am+a+company%0A
## Expected behavior
Either give a wrong answer or a blank answer without any errors
## Environment info
- `transformers` version: 3.0.2
- Platform: Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.1+cu101 (False)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?: Tried with and without GPU in Google colab
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5563/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5563/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5562 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5562/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5562/comments | https://api.github.com/repos/huggingface/transformers/issues/5562/events | https://github.com/huggingface/transformers/pull/5562 | 651,862,502 | MDExOlB1bGxSZXF1ZXN0NDQ1MDM4MDY1 | 5,562 | Fix fast tokenizers too | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,594 | 1,594 | 1,594 | COLLABORATOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5562/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5562/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5562",
"html_url": "https://github.com/huggingface/transformers/pull/5562",
"diff_url": "https://github.com/huggingface/transformers/pull/5562.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5562.patch",
"merged_at": 1594075501000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5561 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5561/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5561/comments | https://api.github.com/repos/huggingface/transformers/issues/5561/events | https://github.com/huggingface/transformers/issues/5561 | 651,832,537 | MDU6SXNzdWU2NTE4MzI1Mzc= | 5,561 | [Docs] Incorrect links to models in the Summary of the Models page | {
"login": "mikaelsouza",
"id": 9092284,
"node_id": "MDQ6VXNlcjkwOTIyODQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9092284?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mikaelsouza",
"html_url": "https://github.com/mikaelsouza",
"followers_url": "https://api.github.com/users/mikaelsouza/followers",
"following_url": "https://api.github.com/users/mikaelsouza/following{/other_user}",
"gists_url": "https://api.github.com/users/mikaelsouza/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mikaelsouza/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mikaelsouza/subscriptions",
"organizations_url": "https://api.github.com/users/mikaelsouza/orgs",
"repos_url": "https://api.github.com/users/mikaelsouza/repos",
"events_url": "https://api.github.com/users/mikaelsouza/events{/privacy}",
"received_events_url": "https://api.github.com/users/mikaelsouza/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Fixed in #5566 "
] | 1,594 | 1,594 | 1,594 | CONTRIBUTOR | null | ### Issues with the Summary of the Models page
- Description:
- Clicking on any model documentation located in https://huggingface.co/transformers/model_summary.html leads us to an empty page.
- Possible Fix:
- I've noticed that the GPT link is https://huggingface.co/model_doc/gpt while it should be https://huggingface.co/transformers/model_doc/gpt.html
- This fix seems to resolve all other issues in the page.
- Observations:
  - I've tried to clone this repo and fix this issue, but for some reason, when I build my own version of the documentation, it doesn't use the `.../transformers/...` path inside the URL, which leads to a different issue than the one in the current online documentation.
- Examples:
<img width="726" alt="Screen Shot 2020-07-06 at 17 28 50" src="https://user-images.githubusercontent.com/9092284/86648750-20481b80-bfaf-11ea-9d3e-b8656d220fdb.png">
EXPECTED:
<img width="1280" alt="Screen Shot 2020-07-06 at 17 29 17" src="https://user-images.githubusercontent.com/9092284/86648771-250ccf80-bfaf-11ea-8552-dd5ab52e36b9.png">
RESULT:
<img width="742" alt="Screen Shot 2020-07-06 at 17 29 01" src="https://user-images.githubusercontent.com/9092284/86648785-28a05680-bfaf-11ea-9578-e16407f58fb4.png">
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5561/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5561/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5560 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5560/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5560/comments | https://api.github.com/repos/huggingface/transformers/issues/5560/events | https://github.com/huggingface/transformers/pull/5560 | 651,827,856 | MDExOlB1bGxSZXF1ZXN0NDQ1MDA5NTE2 | 5,560 | [Reformer] Adapt Reformer MaskedLM Attn mask | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5560?src=pr&el=h1) Report\n> Merging [#5560](https://codecov.io/gh/huggingface/transformers/pull/5560?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/58cca47c16149e43d1b516623d59e3c5d97f695e&el=desc) will **decrease** coverage by `0.41%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5560?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5560 +/- ##\n==========================================\n- Coverage 77.83% 77.42% -0.42% \n==========================================\n Files 141 141 \n Lines 24634 24630 -4 \n==========================================\n- Hits 19175 19069 -106 \n- Misses 5459 5561 +102 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5560?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/5560/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `88.31% <100.00%> (-0.07%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5560/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.92% <0.00%> (-68.47%)` | :arrow_down: |\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5560/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5560/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.59% <0.00%> (+0.33%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5560/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+1.00%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5560/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.76% <0.00%> (+32.51%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5560?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5560?src=pr&el=footer). Last update [58cca47...382c309](https://codecov.io/gh/huggingface/transformers/pull/5560?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Pinging @LysandreJik @thomwolf @flozi00 for notification. "
] | 1,594 | 1,594 | 1,594 | MEMBER | null | The official trax code added an `attention_mask` to the LSH Attention layer a couple of weeks ago: https://github.com/google/trax/commit/94d7d8643d9a6ea38539486dea9ad16da30ec897#diff-a022408d1029c5cbeac49f4589f1b713R1180 . This PR adapts the Hugging Face code to have equal outputs to the official trax code for masked self attention.
Integration tests are adapted and run on branch: `reformer_trax_tests`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5560/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5560/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5560",
"html_url": "https://github.com/huggingface/transformers/pull/5560",
"diff_url": "https://github.com/huggingface/transformers/pull/5560.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5560.patch",
"merged_at": 1594111687000
} |
https://api.github.com/repos/huggingface/transformers/issues/5559 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5559/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5559/comments | https://api.github.com/repos/huggingface/transformers/issues/5559/events | https://github.com/huggingface/transformers/pull/5559 | 651,822,368 | MDExOlB1bGxSZXF1ZXN0NDQ1MDA0OTU3 | 5,559 | Fix #5507 | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5559?src=pr&el=h1) Report\n> Merging [#5559](https://codecov.io/gh/huggingface/transformers/pull/5559?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1bbc28bee719120f3b6f73dbe7dfe6f67e2e9fa7&el=desc) will **increase** coverage by `0.36%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5559?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5559 +/- ##\n==========================================\n+ Coverage 76.79% 77.16% +0.36% \n==========================================\n Files 141 141 \n Lines 24634 24634 \n==========================================\n+ Hits 18917 19008 +91 \n+ Misses 5717 5626 -91 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5559?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5559/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZ3B0Mi5weQ==) | `97.18% <ø> (ø)` | |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5559/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `76.71% <ø> (-19.18%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5559/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.92% <0.00%> (-68.47%)` | :arrow_down: |\n| [src/transformers/optimization\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5559/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5559/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `85.75% <0.00%> (-7.85%)` | :arrow_down: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5559/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `38.73% <0.00%> (-3.76%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/5559/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.02% <0.00%> (-2.18%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5559/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.30% <0.00%> (-1.54%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5559/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.96% <0.00%> (-1.01%)` | :arrow_down: |\n| ... and [12 more](https://codecov.io/gh/huggingface/transformers/pull/5559/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5559?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5559?src=pr&el=footer). 
Last update [1bbc28b...3ec1a31](https://codecov.io/gh/huggingface/transformers/pull/5559?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,594 | 1,594 | 1,594 | COLLABORATOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5559/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5559/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5559",
"html_url": "https://github.com/huggingface/transformers/pull/5559",
"diff_url": "https://github.com/huggingface/transformers/pull/5559.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5559.patch",
"merged_at": 1594070809000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5558 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5558/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5558/comments | https://api.github.com/repos/huggingface/transformers/issues/5558/events | https://github.com/huggingface/transformers/pull/5558 | 651,812,335 | MDExOlB1bGxSZXF1ZXN0NDQ0OTk2NTg2 | 5,558 | Various tokenizers fixes | {
"login": "n1t0",
"id": 1217986,
"node_id": "MDQ6VXNlcjEyMTc5ODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1217986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/n1t0",
"html_url": "https://github.com/n1t0",
"followers_url": "https://api.github.com/users/n1t0/followers",
"following_url": "https://api.github.com/users/n1t0/following{/other_user}",
"gists_url": "https://api.github.com/users/n1t0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/n1t0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/n1t0/subscriptions",
"organizations_url": "https://api.github.com/users/n1t0/orgs",
"repos_url": "https://api.github.com/users/n1t0/repos",
"events_url": "https://api.github.com/users/n1t0/events{/privacy}",
"received_events_url": "https://api.github.com/users/n1t0/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5558?src=pr&el=h1) Report\n> Merging [#5558](https://codecov.io/gh/huggingface/transformers/pull/5558?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1bbc28bee719120f3b6f73dbe7dfe6f67e2e9fa7&el=desc) will **increase** coverage by `1.06%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5558?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5558 +/- ##\n==========================================\n+ Coverage 76.79% 77.85% +1.06% \n==========================================\n Files 141 141 \n Lines 24634 24634 \n==========================================\n+ Hits 18917 19180 +263 \n+ Misses 5717 5454 -263 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5558?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5558/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.32% <ø> (ø)` | |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5558/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.26% <0.00%> (-0.34%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5558/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.50%)` | :arrow_up: |\n| [src/transformers/data/processors/utils.py](https://codecov.io/gh/huggingface/transformers/pull/5558/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvdXRpbHMucHk=) | `27.63% <0.00%> (+1.31%)` | :arrow_up: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5558/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `98.63% <0.00%> (+2.73%)` | :arrow_up: |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/5558/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `77.55% <0.00%> (+11.22%)` | :arrow_up: |\n| [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/5558/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `49.09% <0.00%> (+17.09%)` | :arrow_up: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5558/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `37.96% <0.00%> (+21.29%)` | :arrow_up: |\n| [src/transformers/trainer\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5558/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `85.71% <0.00%> (+25.71%)` | :arrow_up: |\n| [src/transformers/data/datasets/glue.py](https://codecov.io/gh/huggingface/transformers/pull/5558/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2dsdWUucHk=) | `86.56% <0.00%> (+35.82%)` | :arrow_up: |\n| ... and [2 more](https://codecov.io/gh/huggingface/transformers/pull/5558/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5558?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5558?src=pr&el=footer). Last update [1bbc28b...531dd46](https://codecov.io/gh/huggingface/transformers/pull/5558?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,594 | 1,594 | 1,594 | MEMBER | null | Fix https://github.com/huggingface/transformers/issues/5486
Fix https://github.com/huggingface/transformers/issues/5482
Fix https://github.com/huggingface/transformers/issues/5393
Fix https://github.com/huggingface/transformers/issues/5490 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5558/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5558/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5558",
"html_url": "https://github.com/huggingface/transformers/pull/5558",
"diff_url": "https://github.com/huggingface/transformers/pull/5558.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5558.patch",
"merged_at": 1594074474000
} |
https://api.github.com/repos/huggingface/transformers/issues/5557 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5557/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5557/comments | https://api.github.com/repos/huggingface/transformers/issues/5557/events | https://github.com/huggingface/transformers/issues/5557 | 651,753,961 | MDU6SXNzdWU2NTE3NTM5NjE= | 5,557 | Roberta Large doesn't train for sentiment classification | {
"login": "Srj",
"id": 44947896,
"node_id": "MDQ6VXNlcjQ0OTQ3ODk2",
"avatar_url": "https://avatars.githubusercontent.com/u/44947896?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Srj",
"html_url": "https://github.com/Srj",
"followers_url": "https://api.github.com/users/Srj/followers",
"following_url": "https://api.github.com/users/Srj/following{/other_user}",
"gists_url": "https://api.github.com/users/Srj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Srj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Srj/subscriptions",
"organizations_url": "https://api.github.com/users/Srj/orgs",
"repos_url": "https://api.github.com/users/Srj/repos",
"events_url": "https://api.github.com/users/Srj/events{/privacy}",
"received_events_url": "https://api.github.com/users/Srj/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Your learning rate is pretty high and you might be overfitting. Try it again with 3E-5.",
"My question is i have freezed the whole roberta layer. Then why still it needs to be such small lr rather than normal 0.001 as i am training only a classifier head.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,594 | 1,599 | 1,599 | NONE | null | The following is my classifier class. I trained it with appropriate code and the same tokenizer (roberta-base). It trained well on the SST-2 two-class sentiment problem. But when I replaced the base model with roberta-large (also swapping the tokenizer and the input dimension of the linear layer), my classifier got stuck at 50% accuracy, and no matter how many epochs I run, it doesn't improve. Is the weight file faulty, or am I doing something wrong?
```
import torch
import torch.nn as nn
import transformers

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.Bert = transformers.RobertaModel.from_pretrained('roberta-base')
        self.fc0 = nn.Linear(768, 512)
        self.d0 = nn.Dropout(0.5)
        self.d1 = nn.Dropout(0.5)
        self.fc1 = nn.Linear(512, 1)
        nn.init.normal_(self.fc0.weight, std=0.1)
        nn.init.normal_(self.fc0.bias, 0.)
        nn.init.normal_(self.fc1.weight, std=0.1)
        nn.init.normal_(self.fc1.bias, 0.)

    def forward(self, input_ids, attention_mask):
        hid = self.Bert(input_ids, attention_mask=attention_mask)
        hid = hid[0][:, 0, :]  # hidden state of the first (<s>/[CLS]) token
        x = self.d0(hid)
        x = self.fc0(x)
        x = torch.tanh(x)
        x = self.d1(x)
        x = self.fc1(x)  # single logit per example
        return x
```
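A quick aside on the head above (hedged, not a confirmed diagnosis): `nn.CrossEntropyLoss` expects one logit per class, i.e. two output units for a two-class problem, while `fc1` emits a single logit. A single-unit head is usually paired with `nn.BCEWithLogitsLoss` instead; here is a minimal, self-contained sketch of that pairing (the tensors are placeholders, not real data):
```
import torch
import torch.nn as nn

# Placeholder batch: what a single-unit head like fc1 would emit for 4 examples.
logits = torch.randn(4, 1, requires_grad=True)
labels = torch.tensor([1.0, 0.0, 1.0, 0.0])  # binary targets as floats

criterion = nn.BCEWithLogitsLoss()
loss = criterion(logits.squeeze(-1), labels)  # both shapes are (4,)
loss.backward()
```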
This is the optimizer I used; I froze the RoBERTa layers and trained only the final layers. I also tried a learning rate of 3e-5, but that didn't help either.
```
model = Model().to('cuda')
criterion = nn.CrossEntropyLoss(reduction='mean').to('cuda')
for params in model.Bert.parameters():
    params.requires_grad = False  # freeze the RoBERTa encoder
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5557/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5557/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5556 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5556/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5556/comments | https://api.github.com/repos/huggingface/transformers/issues/5556/events | https://github.com/huggingface/transformers/pull/5556 | 651,747,514 | MDExOlB1bGxSZXF1ZXN0NDQ0OTQzMjMx | 5,556 | [pl examples] add using_native_amp flag to support pl 0.8.4 | {
"login": "amirziai",
"id": 8961464,
"node_id": "MDQ6VXNlcjg5NjE0NjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/8961464?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amirziai",
"html_url": "https://github.com/amirziai",
"followers_url": "https://api.github.com/users/amirziai/followers",
"following_url": "https://api.github.com/users/amirziai/following{/other_user}",
"gists_url": "https://api.github.com/users/amirziai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amirziai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amirziai/subscriptions",
"organizations_url": "https://api.github.com/users/amirziai/orgs",
"repos_url": "https://api.github.com/users/amirziai/repos",
"events_url": "https://api.github.com/users/amirziai/events{/privacy}",
"received_events_url": "https://api.github.com/users/amirziai/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"What PL version are you on? The unittests should catch this.",
"@sshleifer `pytorch-lightning==0.8.4` which i believe is the latest",
"Got it.\r\nI think you need to run `make style` to get CI to pass.\r\nWe also may need to wait to merge this until https://github.com/huggingface/transformers/pull/5361, which updates us from pl 0.8.1 to 0.8.5. (we are currently on 0.8.1, you can see in `examples/requirements.txt`).\r\n ",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5556?src=pr&el=h1) Report\n> Merging [#5556](https://codecov.io/gh/huggingface/transformers/pull/5556?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9d9b872b66f9ab9b7b7c73f2c00985dd92c4121b&el=desc) will **increase** coverage by `0.36%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5556?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5556 +/- ##\n==========================================\n+ Coverage 77.00% 77.36% +0.36% \n==========================================\n Files 141 141 \n Lines 24638 24638 \n==========================================\n+ Hits 18973 19062 +89 \n+ Misses 5665 5576 -89 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5556?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5556/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.78% <0.00%> (-74.20%)` | :arrow_down: |\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5556/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5556/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `82.70% <0.00%> (-3.76%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5556/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.72% <0.00%> (-1.38%)` | :arrow_down: |\n| [src/transformers/data/processors/utils.py](https://codecov.io/gh/huggingface/transformers/pull/5556/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvdXRpbHMucHk=) | `27.63% <0.00%> (+1.31%)` | :arrow_up: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5556/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `98.63% <0.00%> (+2.73%)` | :arrow_up: |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/5556/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `77.55% <0.00%> (+11.22%)` | :arrow_up: |\n| [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/5556/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `49.09% <0.00%> (+17.09%)` | :arrow_up: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5556/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `37.96% <0.00%> (+21.29%)` | :arrow_up: |\n| [src/transformers/trainer\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5556/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `85.71% <0.00%> (+25.71%)` | :arrow_up: |\n| ... and [4 more](https://codecov.io/gh/huggingface/transformers/pull/5556/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5556?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5556?src=pr&el=footer). Last update [9d9b872...c89d9ca](https://codecov.io/gh/huggingface/transformers/pull/5556?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"I think this was fixed by another PR."
] | 1,594 | 1,595 | 1,595 | NONE | null | Running the example `finetune_bart_tiny.sh` in `seq2seq` fails with:
```
Traceback (most recent call last):
  File "finetune.py", line 344, in <module>
    main(args)
  File "finetune.py", line 322, in main
    logger=logger,
  File "/root/notebooks/analysis/transformers/examples/lightning_base.py", line 330, in generic_train
    trainer.fit(model)
  File "/apps/python3/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1020, in fit
    self.run_pretrain_routine(model)
  File "/apps/python3/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1156, in run_pretrain_routine
    self.train()
  File "/apps/python3/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 370, in train
    self.run_training_epoch()
  File "/apps/python3/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 452, in run_training_epoch
    batch_output = self.run_training_batch(batch, batch_idx)
  File "/apps/python3/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 659, in run_training_batch
    grad_norm_dic = self.run_batch_backward_pass(split_batch, batch_idx, opt_idx, optimizer)
  File "/apps/python3/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 711, in run_batch_backward_pass
    self.call_optimizer_step(optimizer, opt_idx, batch_idx, split_batch)
  File "/apps/python3/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 750, in call_optimizer_step
    using_native_amp=native_amp)
TypeError: optimizer_step() got an unexpected keyword argument 'using_native_amp'
```
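The call in `run_batch_backward_pass` forwards `using_native_amp`, which the example's `optimizer_step` override apparently does not accept. A hedged sketch of an override that would absorb such version-specific flags (`PatchedModule` is a hypothetical name; the exact hook signature varies across pytorch-lightning releases, hence the defensive `**kwargs`):
```
import pytorch_lightning as pl

class PatchedModule(pl.LightningModule):
    def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx,
                       second_order_closure=None, **kwargs):
        # Swallow flags such as using_native_amp that newer pl versions pass in.
        optimizer.step()
        optimizer.zero_grad()
```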
So the issue seems to be this missing keyword argument, which an override like the one sketched above would absorb. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5556/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5556/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5556",
"html_url": "https://github.com/huggingface/transformers/pull/5556",
"diff_url": "https://github.com/huggingface/transformers/pull/5556.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5556.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/5555 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5555/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5555/comments | https://api.github.com/repos/huggingface/transformers/issues/5555/events | https://github.com/huggingface/transformers/issues/5555 | 651,732,664 | MDU6SXNzdWU2NTE3MzI2NjQ= | 5,555 | Training TFBertForSequenceClassification with DataFrame instead of tensorflow_datasets | {
"login": "konstantin-doncov",
"id": 6806786,
"node_id": "MDQ6VXNlcjY4MDY3ODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/6806786?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/konstantin-doncov",
"html_url": "https://github.com/konstantin-doncov",
"followers_url": "https://api.github.com/users/konstantin-doncov/followers",
"following_url": "https://api.github.com/users/konstantin-doncov/following{/other_user}",
"gists_url": "https://api.github.com/users/konstantin-doncov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/konstantin-doncov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/konstantin-doncov/subscriptions",
"organizations_url": "https://api.github.com/users/konstantin-doncov/orgs",
"repos_url": "https://api.github.com/users/konstantin-doncov/repos",
"events_url": "https://api.github.com/users/konstantin-doncov/events{/privacy}",
"received_events_url": "https://api.github.com/users/konstantin-doncov/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I also find out that every example which uses transformers with `binary_crossentropy` loss changes the pretrained model(i.e. adds some layers). Is it necessary? If so, then what minimal working example for this? ",
"Now I'm just trying to copy [this Kaggle notebook](https://www.kaggle.com/definedennis/pretrained-bert-with-huggingface-tensorflow-2-1), here is my code:\r\n```\r\ndf = pd.DataFrame({'text': ['SOME ANGRY TEXT!!!', 'Some friendly text :)'], 'label': [1, 0]})\r\n\r\ndef create_model():\r\n bert_model = transformers.TFBertModel.from_pretrained(\"bert-base-cased\")\r\n \r\n input_ids = tf.keras.layers.Input(shape=(10,), dtype=tf.int32, name='input_ids')\r\n token_type_ids = tf.keras.layers.Input((10,), dtype=tf.int32, name='token_type_ids')\r\n attention_mask = tf.keras.layers.Input((10,), dtype=tf.int32, name='attention_mask')\r\n \r\n # Use pooled_output(hidden states of [CLS]) as sentence level embedding\r\n pooled_output = bert_model({'input_ids': input_ids, 'attention_mask': attention_mask, 'token_type_ids': token_type_ids})[1]\r\n x = tf.keras.layers.Dropout(rate=0.1)(pooled_output)\r\n x = tf.keras.layers.Dense(1, activation='sigmoid')(x)\r\n model = tf.keras.models.Model(inputs={'input_ids': input_ids, 'attention_mask': attention_mask, 'token_type_ids': token_type_ids}, outputs=x)\r\n return model\r\n\r\nbert_model = create_model()\r\nbert_tokenizer = transformers.BertTokenizer.from_pretrained(\"bert-base-cased\")\r\n\r\nx = bert_tokenizer.batch_encode_plus(\r\n df.text.values,\r\n max_length=10,\r\n pad_to_max_length=True, \r\n return_tensors='tf'\r\n)\r\n\r\nbert_model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['Accuracy'])\r\n\r\nbert_history = bert_model.fit(\r\n x=x,\r\n y=df.label.values\r\n)\r\n```\r\n\r\nOutput:\r\n```\r\n~/.local/lib/python3.8/site-packages/tensorflow/python/framework/ops.py in __hash__(self)\r\n 724 if (Tensor._USE_EQUALITY and executing_eagerly_outside_functions() and\r\n 725 (g is None or g.building_function)):\r\n--> 726 raise TypeError(\"Tensor is unhashable. \"\r\n 727 \"Instead, use tensor.ref() as the key.\")\r\n 728 else:\r\n\r\nTypeError: Tensor is unhashable. Instead, use tensor.ref() as the key.\r\n```\r\n\r\nHow can I fix it?",
"Now I just use the previous Kaggle notebook, even without code modifications. My code:\r\n\r\n```\r\n# This Python 3 environment comes with many helpful analytics libraries installed\r\n# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python\r\n# For example, here's several helpful packages to load in \r\n\r\nimport numpy as np # linear algebra\r\nimport pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)\r\n\r\nfrom sklearn.model_selection import train_test_split\r\nfrom sklearn.metrics import classification_report\r\nfrom tqdm.notebook import tqdm\r\n\r\nimport tensorflow as tf\r\nfrom tensorflow import keras\r\nimport tensorflow.keras.backend as K\r\nfrom tensorflow.keras import layers\r\nfrom tensorflow.keras.utils import plot_model\r\nfrom transformers import (\r\n BertTokenizer,\r\n TFBertForSequenceClassification,\r\n TFBertModel,\r\n BertConfig,\r\n)\r\ntf.__version__\r\n\r\nMAX_SEQUENCE_LENGTH = 255\r\nPRETRAINED_MODEL_NAME = 'bert-base-uncased'\r\nBATCH_SIZE = 32\r\n\r\ndf = pd.read_csv('train.csv')\r\n\r\ndf.head()\r\n\r\ndf['target'].value_counts()\r\n\r\ndf.isnull().sum()\r\n\r\ndata = df['text'].values\r\ntargets = df['target'].values\r\n\r\ndef create_model():\r\n bert_model = TFBertModel.from_pretrained(PRETRAINED_MODEL_NAME)\r\n \r\n input_ids = layers.Input(shape=(MAX_SEQUENCE_LENGTH,), dtype=tf.int32, name='input_ids')\r\n token_type_ids = layers.Input((MAX_SEQUENCE_LENGTH,), dtype=tf.int32, name='token_type_ids')\r\n attention_mask = layers.Input((MAX_SEQUENCE_LENGTH,), dtype=tf.int32, name='attention_mask')\r\n \r\n # Use pooled_output(hidden states of [CLS]) as sentence level embedding\r\n pooled_output = bert_model({'input_ids': input_ids, 'attention_mask': attention_mask, 'token_type_ids': token_type_ids})[1]\r\n x = layers.Dropout(rate=0.1)(pooled_output)\r\n x = layers.Dense(1, activation='sigmoid')(x)\r\n model = keras.models.Model(inputs={'input_ids': input_ids, 'attention_mask': attention_mask, 'token_type_ids': token_type_ids}, outputs=x)\r\n return model\r\n\r\ntokenizer = BertTokenizer.from_pretrained(PRETRAINED_MODEL_NAME)\r\nmodel = create_model()\r\n\r\nmodel.summary()\r\n\r\nplot_model(model, to_file='model.png', expand_nested=True, show_shapes=True)\r\n\r\nopt = tf.keras.optimizers.Adam(learning_rate=3e-5)\r\nmodel.compile(optimizer=opt, loss='binary_crossentropy', metrics=['accuracy'])\r\n\r\nX_train, X_val, y_train, y_val = train_test_split(data, targets, test_size=0.33, random_state=42, stratify=targets)\r\n\r\nX_train = tokenizer.batch_encode_plus(X_train, max_length=MAX_SEQUENCE_LENGTH, pad_to_max_length=True, return_tensors='tf')\r\nX_val = tokenizer.batch_encode_plus(X_val, max_length=MAX_SEQUENCE_LENGTH, pad_to_max_length=True, return_tensors='tf')\r\n\r\nhistory = model.fit(\r\n x=X_train,\r\n y=y_train,\r\n validation_data=(X_val, y_val),\r\n epochs=3,\r\n batch_size=BATCH_SIZE\r\n)\r\n```\r\n\r\nOutput:\r\n```\r\n/usr/lib/python3.8/_collections_abc.py in update(self, other, **kwds)\r\n 835 self[key] = other[key]\r\n 836 else:\r\n--> 837 for key, value in other:\r\n 838 self[key] = value\r\n 839 for key, value in kwds.items():\r\n\r\nValueError: too many values to unpack (expected 2)\r\n``` ",
"You can check [this](https://colab.research.google.com/drive/1DT6lSWRZ3CIIm9noaJxPYjtOavWQB23S?usp=sharing) and [this](https://colab.research.google.com/drive/125jJ0qrXGIe6goNrH_Ja7XPZtYp7nMXU?usp=sharing) gists for reproduction of the errors.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,594 | 1,600 | 1,600 | NONE | null | I'm trying to fine-tune `transformers` with my own dataset from a `csv` file. I found [an item in the docs which shows a basic usage example](https://huggingface.co/transformers/training.html#fine-tuning-in-native-tensorflow-2), but the main problem is that this example shows how to use transformers with `tensorflow_datasets`, not with something more realistic, like a `pandas` `DataFrame`. So I have a problem with the `transformers` usage:
```
import pandas as pd
import transformers

df = pd.DataFrame({'text': ['SOME ANGRY TEXT!!!', 'Some friendly text :)'], 'label': [1, 0]})
bert_model = transformers.TFBertForSequenceClassification.from_pretrained("bert-base-cased")
bert_tokenizer = transformers.BertTokenizer.from_pretrained("bert-base-cased")
x = bert_tokenizer.batch_encode_plus(
    df.text.values,
    max_length=10,
    add_special_tokens=True,
    pad_to_max_length=True,
)
bert_model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['Accuracy'])
bert_history = bert_model.fit(
    x=x,
    y=df.label.values.reshape(-1, 1),
)
```
Output:
```
ValueError: Failed to find data adapter that can handle input: (<class 'list'> containing values of types {'(<class \'list\'> containing values of types {"<class \'int\'>"})'}), <class 'numpy.ndarray'>
```
**So, how can I use my custom `pandas` `DataFrame` as x and y, or as a dataset?**
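For what it's worth, one pattern that typically works here (a hedged sketch, not an official recipe; it assumes the `label` column holds 0/1 class ids) is to ask the tokenizer for TensorFlow tensors and wrap everything in a `tf.data.Dataset`:
```
import pandas as pd
import tensorflow as tf
import transformers

df = pd.DataFrame({'text': ['SOME ANGRY TEXT!!!', 'Some friendly text :)'], 'label': [1, 0]})

tokenizer = transformers.BertTokenizer.from_pretrained("bert-base-cased")
enc = tokenizer.batch_encode_plus(
    list(df.text.values),
    max_length=10,
    pad_to_max_length=True,
    return_tensors="tf",  # tensors instead of Python lists
)

dataset = tf.data.Dataset.from_tensor_slices((dict(enc), df.label.values)).batch(2)

model = transformers.TFBertForSequenceClassification.from_pretrained("bert-base-cased")
model.compile(
    optimizer="adam",
    # The model emits two logits per example, so use a sparse from_logits loss.
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
model.fit(dataset, epochs=1)
```
The key differences from the snippet above are `return_tensors="tf"` and feeding the encodings to Keras as a dict inside a dataset.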
Also, note that I found a few questions on SO which ask almost the same thing and have upvotes; [here is one of the latest questions I found](https://stackoverflow.com/questions/60463829/training-tfbertforsequenceclassification-with-custom-x-and-y-data). **So maybe people would find an MWE with `np` or `pd` useful?** | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5555/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/5555/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5554 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5554/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5554/comments | https://api.github.com/repos/huggingface/transformers/issues/5554/events | https://github.com/huggingface/transformers/issues/5554 | 651,723,386 | MDU6SXNzdWU2NTE3MjMzODY= | 5,554 | huggingface optimizer cannot de-serialize | {
"login": "suhasvk",
"id": 11435124,
"node_id": "MDQ6VXNlcjExNDM1MTI0",
"avatar_url": "https://avatars.githubusercontent.com/u/11435124?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/suhasvk",
"html_url": "https://github.com/suhasvk",
"followers_url": "https://api.github.com/users/suhasvk/followers",
"following_url": "https://api.github.com/users/suhasvk/following{/other_user}",
"gists_url": "https://api.github.com/users/suhasvk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/suhasvk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/suhasvk/subscriptions",
"organizations_url": "https://api.github.com/users/suhasvk/orgs",
"repos_url": "https://api.github.com/users/suhasvk/repos",
"events_url": "https://api.github.com/users/suhasvk/events{/privacy}",
"received_events_url": "https://api.github.com/users/suhasvk/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Great, thanks for your analysis! Do you think you can open a PR with your proposed fix?",
"Sure, will do!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hi @suhasvk , Did you open a PR for the fix. I encountered the same issue. ",
"Hi @LysandreJik , is there any PR for the fix. I encountered the same issue."
] | 1,594 | 1,603 | 1,599 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Bert
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```
>>> import transformers
>>> opt, _ = transformers.optimization_tf.create_optimizer(init_lr=2e-5, num_train_steps=1000, num_warmup_steps=10)
>>> opt.__class__.from_config(opt.get_config())
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/ec2-user/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/tensorflow_core/python/keras/optimizer_v2/optimizer_v2.py", line 741, in from_config
    config["learning_rate"], custom_objects=custom_objects)
  File "/home/ec2-user/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/tensorflow_core/python/keras/optimizer_v2/learning_rate_schedule.py", line 992, in deserialize
    printable_module_name="decay")
  File "/home/ec2-user/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/tensorflow_core/python/keras/utils/generic_utils.py", line 292, in deserialize_keras_object
    config, module_objects, custom_objects, printable_module_name)
  File "/home/ec2-user/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/tensorflow_core/python/keras/utils/generic_utils.py", line 250, in class_and_config_for_serialized_keras_object
    raise ValueError('Unknown ' + printable_module_name + ': ' + class_name)
ValueError: Unknown decay: WarmUp
```
## Expected behavior
One would expect that the `tf.keras.optimizers.Adam` object `opt` is successfully reconstructed from its config dictionary.
After inspecting `transformers.optimization_tf`, what I believe is happening here is that `create_optimizer` returns a regular `tf.keras.optimizers.Adam` object in the event that the `weight_decay_rate` parameter is zero:
```
else:
    optimizer = tf.keras.optimizers.Adam(learning_rate=lr_schedule, epsilon=adam_epsilon)
```
However, if `num_warmup_steps` is nonzero, this object is instantiated with a _custom_ `tf.keras.optimizers.schedules.LearningRateSchedule` object of type `WarmUp`:
```
if num_warmup_steps:
    lr_schedule = WarmUp(
        initial_learning_rate=init_lr, decay_schedule_fn=lr_schedule, warmup_steps=num_warmup_steps,
    )
```
In this case the `tf.keras.optimizers.Adam` implementation of `from_config` does not know to pass this class under the `custom_objects` keyword argument, causing deserialization to fail.
In my specific setting, this causes the method `horovod.create_distributed_optimizer(opt)` to fail, since it relies on serialization / deserialization to programmatically extend the passed optimizer class.
A solution that should work in my specific setting (and more generally) is to create an intermediate class such as
```
class AdamWarmUp(tf.keras.optimizers.Adam):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)

    @classmethod
    def from_config(cls, config):
        custom_objects = {"WarmUp": WarmUp}
        return super().from_config(config, custom_objects=custom_objects)
```
which is returned by `create_optimizer` and correctly implements `tf.keras.optimizers.Optimizer`, and to have the `AdamWeightDecay` optimizer extend this class.
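For completeness, a hedged round-trip check with that intermediate class (this assumes `create_optimizer` is patched to return the `AdamWarmUp` defined above; the schedule values are arbitrary):
```
import tensorflow as tf
from transformers.optimization_tf import WarmUp

decay = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-5, decay_steps=1000, end_learning_rate=0.0
)
opt = AdamWarmUp(
    learning_rate=WarmUp(initial_learning_rate=2e-5, decay_schedule_fn=decay, warmup_steps=10)
)
restored = AdamWarmUp.from_config(opt.get_config())  # should no longer raise "Unknown decay: WarmUp"
```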
## Environment info
- `transformers` version: 3.0.0
- Platform: Linux-4.14.181-108.257.amzn1.x86_64-x86_64-with-glibc2.9
- Python version: 3.6.10
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.1.0 (True)
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: True
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5554/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5554/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5553 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5553/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5553/comments | https://api.github.com/repos/huggingface/transformers/issues/5553/events | https://github.com/huggingface/transformers/issues/5553 | 651,720,153 | MDU6SXNzdWU2NTE3MjAxNTM= | 5,553 | Customize widget text-generation inference with prepended input | {
"login": "borisdayma",
"id": 715491,
"node_id": "MDQ6VXNlcjcxNTQ5MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/715491?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/borisdayma",
"html_url": "https://github.com/borisdayma",
"followers_url": "https://api.github.com/users/borisdayma/followers",
"following_url": "https://api.github.com/users/borisdayma/following{/other_user}",
"gists_url": "https://api.github.com/users/borisdayma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/borisdayma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/borisdayma/subscriptions",
"organizations_url": "https://api.github.com/users/borisdayma/orgs",
"repos_url": "https://api.github.com/users/borisdayma/repos",
"events_url": "https://api.github.com/users/borisdayma/events{/privacy}",
"received_events_url": "https://api.github.com/users/borisdayma/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
}
] | [
"There is some documentation here https://huggingface.co/docs but we will add more later this week.",
"I don't think we want to mask (pun intended) too much in the inference widget what the model actually consumes, but did you see that:\r\n- you can specify your model's widget's example input, by adding metadata to your model card, cf. https://huggingface.co/docs#how-can-i-control-my-models-widgets-example-inputs\r\n- you can define a prefix in your config.json's `task_specific_params`, see T5 for instance: https://s3.amazonaws.com/models.huggingface.co/bert/t5-base-config.json (cc @patrickvonplaten)",
"> * you can define a prefix in your config.json's `task_specific_params`, see T5 for instance: https://s3.amazonaws.com/models.huggingface.co/bert/t5-base-config.json (cc @patrickvonplaten)\r\n\r\nI had not noticed it! This looks comprehensive and exactly what is needed!",
"> * you can define a prefix in your config.json's `task_specific_params`, see T5 for instance: https://s3.amazonaws.com/models.huggingface.co/bert/t5-base-config.json (cc @patrickvonplaten)\r\n\r\nJust a quick note that the prefix here is not for text generation pipeline but I used the same idea in PR #5885 with existing \"padding_text\" variable that I turned into a task specific configuration parameter.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This was implemented"
] | 1,594 | 1,600 | 1,600 | CONTRIBUTOR | null | # 🚀 Feature request
Some models for text generation make use of special tokens or additional initialization text.
It would be useful to add the possibility to pass additional text to the input string (before or after) and also let people strip input text.
## Motivation
It is not clear how the widget API generates text.
For example, with my little demo huggingtweets, I see two limitations:
1. I always strip the input with `text.strip` and add a whitespace at the beginning; otherwise we get the wrong tokenization of `"Paris"` vs `" Paris"`.
2. With the way my model is trained, I always want to add `<|endoftext|>` at the beginning (training adds it between every tweet)
Input such as `" this is my input "` would, after stripping and adding the whitespace and special token, become `"<|endoftext|> this is my input"`, and would then be passed to the tokenizer.
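For concreteness, a tiny hypothetical helper mirroring that transformation (the name is illustrative):
```
def preprocess(text: str) -> str:
    # "  this is my input  " -> "<|endoftext|> this is my input"
    return "<|endoftext|> " + text.strip()
```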
I can see other cases where people add their own special tokens for certain tasks.
## Your contribution
I don't think there is any public access to the API, so I cannot contribute there. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5553/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5553/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5552 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5552/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5552/comments | https://api.github.com/repos/huggingface/transformers/issues/5552/events | https://github.com/huggingface/transformers/pull/5552 | 651,701,158 | MDExOlB1bGxSZXF1ZXN0NDQ0OTA2MTMy | 5,552 | [Don't merge] Reformer Trax Integration Tests | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,594 | 1,599 | 1,599 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5552/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5552/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5552",
"html_url": "https://github.com/huggingface/transformers/pull/5552",
"diff_url": "https://github.com/huggingface/transformers/pull/5552.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5552.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5551 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5551/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5551/comments | https://api.github.com/repos/huggingface/transformers/issues/5551/events | https://github.com/huggingface/transformers/pull/5551 | 651,619,255 | MDExOlB1bGxSZXF1ZXN0NDQ0ODQwMDcw | 5,551 | Fix #5544 | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5551?src=pr&el=h1) Report\n> Merging [#5551](https://codecov.io/gh/huggingface/transformers/pull/5551?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1bbc28bee719120f3b6f73dbe7dfe6f67e2e9fa7&el=desc) will **increase** coverage by `1.10%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5551?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5551 +/- ##\n==========================================\n+ Coverage 76.79% 77.90% +1.10% \n==========================================\n Files 141 141 \n Lines 24634 24634 \n==========================================\n+ Hits 18917 19190 +273 \n+ Misses 5717 5444 -273 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5551?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5551/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.95% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5551/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `73.37% <0.00%> (-25.00%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5551/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.26% <0.00%> (-0.34%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5551/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> (+0.75%)` | :arrow_up: |\n| [src/transformers/data/processors/utils.py](https://codecov.io/gh/huggingface/transformers/pull/5551/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvdXRpbHMucHk=) | `27.63% <0.00%> (+1.31%)` | :arrow_up: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5551/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `98.63% <0.00%> (+2.73%)` | :arrow_up: |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/5551/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `77.55% <0.00%> (+11.22%)` | :arrow_up: |\n| [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/5551/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `49.09% <0.00%> (+17.09%)` | :arrow_up: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5551/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `37.96% <0.00%> (+21.29%)` | :arrow_up: |\n| [src/transformers/trainer\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5551/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `85.71% <0.00%> (+25.71%)` | :arrow_up: |\n| ... and [4 more](https://codecov.io/gh/huggingface/transformers/pull/5551/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5551?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5551?src=pr&el=footer). Last update [1bbc28b...f8f28f5](https://codecov.io/gh/huggingface/transformers/pull/5551?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,594 | 1,594 | 1,594 | COLLABORATOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5551/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5551/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5551",
"html_url": "https://github.com/huggingface/transformers/pull/5551",
"diff_url": "https://github.com/huggingface/transformers/pull/5551.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5551.patch",
"merged_at": 1594048945000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5550 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5550/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5550/comments | https://api.github.com/repos/huggingface/transformers/issues/5550/events | https://github.com/huggingface/transformers/pull/5550 | 651,607,438 | MDExOlB1bGxSZXF1ZXN0NDQ0ODMwMzky | 5,550 | Fix the tokenization warning noted in #5505 | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5550?src=pr&el=h1) Report\n> Merging [#5550](https://codecov.io/gh/huggingface/transformers/pull/5550?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1bc13697b1d2197a18f4ed6009e19aed315ab0f0&el=desc) will **increase** coverage by `0.56%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5550?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5550 +/- ##\n==========================================\n+ Coverage 77.30% 77.86% +0.56% \n==========================================\n Files 141 141 \n Lines 24634 24634 \n==========================================\n+ Hits 19043 19181 +138 \n+ Misses 5591 5453 -138 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5550?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5550/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.95% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5550/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `66.25% <0.00%> (-32.52%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5550/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (ø)` | |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5550/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.60% <0.00%> (+1.16%)` | :arrow_up: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/5550/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `99.15% <0.00%> (+2.53%)` | :arrow_up: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5550/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.48% <0.00%> (+8.92%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/5550/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jdHJsLnB5) | `97.69% <0.00%> (+12.69%)` | :arrow_up: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5550/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `83.84% <0.00%> (+13.07%)` | :arrow_up: |\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/5550/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `82.93% <0.00%> (+66.66%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5550?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5550?src=pr&el=footer). Last update [1bc1369...a9929ce](https://codecov.io/gh/huggingface/transformers/pull/5550?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,594 | 1,594 | 1,594 | MEMBER | null | Fix https://github.com/huggingface/transformers/issues/5505 (unwanted warning in batch_encode_plus)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5550/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5550/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5550",
"html_url": "https://github.com/huggingface/transformers/pull/5550",
"diff_url": "https://github.com/huggingface/transformers/pull/5550.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5550.patch",
"merged_at": 1594048526000
} |
https://api.github.com/repos/huggingface/transformers/issues/5549 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5549/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5549/comments | https://api.github.com/repos/huggingface/transformers/issues/5549/events | https://github.com/huggingface/transformers/pull/5549 | 651,599,249 | MDExOlB1bGxSZXF1ZXN0NDQ0ODIzNjQ3 | 5,549 | The `add_space_before_punct_symbol` is only for TransfoXL | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5549?src=pr&el=h1) Report\n> Merging [#5549](https://codecov.io/gh/huggingface/transformers/pull/5549?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1bbc28bee719120f3b6f73dbe7dfe6f67e2e9fa7&el=desc) will **decrease** coverage by `0.19%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5549?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5549 +/- ##\n==========================================\n- Coverage 76.79% 76.60% -0.20% \n==========================================\n Files 141 141 \n Lines 24634 24634 \n==========================================\n- Hits 18917 18870 -47 \n- Misses 5717 5764 +47 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5549?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5549/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.92% <0.00%> (-68.47%)` | :arrow_down: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5549/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5549/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.68% <0.00%> (-0.72%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5549/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.26% <0.00%> (-0.34%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5549/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.50%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5549/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.76% <0.00%> (+32.51%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5549?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5549?src=pr&el=footer). Last update [1bbc28b...3d1ac81](https://codecov.io/gh/huggingface/transformers/pull/5549?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,594 | 1,594 | 1,594 | MEMBER | null | Same fix as in the [pipelines](https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines.py#L626). Not the cleanest way to do a per-model change, but is bug-free for v3.0.2.
closes https://github.com/huggingface/transformers/issues/5525 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5549/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5549/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5549",
"html_url": "https://github.com/huggingface/transformers/pull/5549",
"diff_url": "https://github.com/huggingface/transformers/pull/5549.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5549.patch",
"merged_at": 1594052225000
} |
https://api.github.com/repos/huggingface/transformers/issues/5548 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5548/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5548/comments | https://api.github.com/repos/huggingface/transformers/issues/5548/events | https://github.com/huggingface/transformers/issues/5548 | 651,594,737 | MDU6SXNzdWU2NTE1OTQ3Mzc= | 5,548 | Possibility to use WhitespaceSplit as pre_tokenizer instead of BPE/Sentencepiece? | {
"login": "tonytan48",
"id": 10150598,
"node_id": "MDQ6VXNlcjEwMTUwNTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/10150598?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tonytan48",
"html_url": "https://github.com/tonytan48",
"followers_url": "https://api.github.com/users/tonytan48/followers",
"following_url": "https://api.github.com/users/tonytan48/following{/other_user}",
"gists_url": "https://api.github.com/users/tonytan48/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tonytan48/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tonytan48/subscriptions",
"organizations_url": "https://api.github.com/users/tonytan48/orgs",
"repos_url": "https://api.github.com/users/tonytan48/repos",
"events_url": "https://api.github.com/users/tonytan48/events{/privacy}",
"received_events_url": "https://api.github.com/users/tonytan48/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,594 | 1,599 | 1,599 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiasts can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
I was customizing a tokenizer with vocab and merges files from the `tokenizers` module. However, when running the intended language-modelling task with the customized tokenizer, the class in the transformers repository (`PreTrainedTokenizer`) is not the same. I was trying to run a RoBERTa masked-LM objective with a whole-word (whitespace) tokenization scheme, but it seems that `RobertaTokenizer` does not support changing the `pre_tokenizer` scheme. May I know what I should do in such a case?
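For reference, this is roughly how I build the tokenizer with the standalone `tokenizers` library (a minimal sketch; the `vocab.json`/`merges.txt` paths are placeholders for files from my own training run, and the exact `BPE` constructor varies across `tokenizers` versions):

```python
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import WhitespaceSplit

# Load the trained BPE model (placeholder paths), then swap the default
# pre-tokenizer for plain whitespace splitting.
tokenizer = Tokenizer(BPE.from_file("vocab.json", "merges.txt"))
tokenizer.pre_tokenizer = WhitespaceSplit()

print(tokenizer.encode("some raw text").tokens)
```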
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5548/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5548/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5547 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5547/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5547/comments | https://api.github.com/repos/huggingface/transformers/issues/5547/events | https://github.com/huggingface/transformers/issues/5547 | 651,591,112 | MDU6SXNzdWU2NTE1OTExMTI= | 5,547 | [Feature Request] Extract Predictions from Trainer | {
"login": "geblanco",
"id": 6652222,
"node_id": "MDQ6VXNlcjY2NTIyMjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/6652222?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/geblanco",
"html_url": "https://github.com/geblanco",
"followers_url": "https://api.github.com/users/geblanco/followers",
"following_url": "https://api.github.com/users/geblanco/following{/other_user}",
"gists_url": "https://api.github.com/users/geblanco/gists{/gist_id}",
"starred_url": "https://api.github.com/users/geblanco/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/geblanco/subscriptions",
"organizations_url": "https://api.github.com/users/geblanco/orgs",
"repos_url": "https://api.github.com/users/geblanco/repos",
"events_url": "https://api.github.com/users/geblanco/events{/privacy}",
"received_events_url": "https://api.github.com/users/geblanco/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Unless I'm mistaken, Question-answering does support Trainer. Is a specific feature missing for your use case?",
"Hi @julien-c!\r\n\r\nAt the time of writing the PR was not merged yet (and I was not aware of it, my bad). It is great!\r\n\r\nIn any case, the previous example (the one without trainer) outputted both `predictions` and `nbest_predictions`, which are crucial for error analysis.\r\n\r\nThe new example with trainer looses this ability (because the new dataset/trainer API does not support it). Adding this feature could benefit not only SQuAD, but other datasets too (i.e.: RACE).\r\n\r\nWhat do you think on this regard?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Any updates on this matter? I recently started using `transformers` and running the multiple-choice example I realized there's an output of the overall accuracy but I can't find the predictions. Currently going through the code to obtain them...",
"Hi @dianags, \r\n\r\nMight be a little late, but tired of waiting, I programmed my own solution that output predictions and some more data, the code can be found [here](https://github.com/geblanco/mc_transformers)\r\n\r\nBest,",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,594 | 1,607 | 1,607 | NONE | null | # 🚀 Feature request
Add the possibility to return predictions along with their example IDs in the new Trainer class.
## Motivation
When working with extractive QA (e.g. SQuAD), you get back the best predictions, but the current example for running SQuAD uses the old, plain training/eval script, without the new Trainer class.
Additionally, there are other tasks where predictions can be extremely useful (e.g. Multiple Choice).
Adding such functionality in the Trainer class could solve this and unify both question answering and multiple choice examples.
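For illustration, the closest existing hook is `Trainer.predict`, which returns raw logits but no example IDs; a minimal sketch of the kind of helper I have in mind (the `trainer`, `dataset`, and `example_ids` objects are assumed to exist already):

```python
import numpy as np

def extract_predictions(trainer, dataset, example_ids):
    # `trainer` is an already-built transformers.Trainer and `dataset` the
    # eval/test dataset it was configured for (both assumed here).
    output = trainer.predict(dataset)  # PredictionOutput(predictions, label_ids, metrics)
    predicted_labels = np.argmax(output.predictions, axis=-1)
    return list(zip(example_ids, predicted_labels.tolist()))
```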
## Your contribution
I am familiarized with the code (both the Trainer class and the old train/eval script), so I could submit a PR with the new functionality. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5547/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5547/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5546 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5546/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5546/comments | https://api.github.com/repos/huggingface/transformers/issues/5546/events | https://github.com/huggingface/transformers/pull/5546 | 651,581,766 | MDExOlB1bGxSZXF1ZXN0NDQ0ODA5NDcw | 5,546 | GPT2 tokenizer should not output token type IDs | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,594 | 1,594 | 1,594 | MEMBER | null | The GPT-2 and OpenAI GPT tokenizers should not output `token_type_ids` by default.
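A quick way to check the new behavior (a sketch; `token_type_ids` should no longer appear unless explicitly requested):

```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
encoding = tokenizer("hello world")
print("token_type_ids" in encoding)  # expected: False once this PR is in
```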
closes https://github.com/huggingface/transformers/issues/5517
Code quality checks will pass once this is merged into master.
Edit (@thomwolf):
This will also fix #4922 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5546/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5546/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5546",
"html_url": "https://github.com/huggingface/transformers/pull/5546",
"diff_url": "https://github.com/huggingface/transformers/pull/5546.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5546.patch",
"merged_at": 1594049637000
} |
https://api.github.com/repos/huggingface/transformers/issues/5545 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5545/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5545/comments | https://api.github.com/repos/huggingface/transformers/issues/5545/events | https://github.com/huggingface/transformers/issues/5545 | 651,576,208 | MDU6SXNzdWU2NTE1NzYyMDg= | 5,545 | Batching (TF)BertForQuestionAnswering deployment | {
"login": "rdisipio",
"id": 7974270,
"node_id": "MDQ6VXNlcjc5NzQyNzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/7974270?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rdisipio",
"html_url": "https://github.com/rdisipio",
"followers_url": "https://api.github.com/users/rdisipio/followers",
"following_url": "https://api.github.com/users/rdisipio/following{/other_user}",
"gists_url": "https://api.github.com/users/rdisipio/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rdisipio/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rdisipio/subscriptions",
"organizations_url": "https://api.github.com/users/rdisipio/orgs",
"repos_url": "https://api.github.com/users/rdisipio/repos",
"events_url": "https://api.github.com/users/rdisipio/events{/privacy}",
"received_events_url": "https://api.github.com/users/rdisipio/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,594 | 1,599 | 1,599 | NONE | null | # 🚀 Feature request
It may already be possible, but I couldn't figure out how, to call TFBertForQuestionAnswering on a batch of context-question pairs rather than just one pair at a time.
## Motivation
I tried out the same code on a machine with 16 cores and a GPU. I could process 22 pairs/sec with the CPUs, but only 11 pairs/sec with the GPU. I assume it's not efficient due to I/O. However, I couldn't find a way to make such a batched call to the BERT model in the examples.
For example, when I do this:
```
input_ids = tokenizer.encode(question, context, add_special_tokens=True, max_length=512)
```
I would like to promote `question` and `context` to arrays of pairs, so that `start_scores` and `end_scores` come back as 2D arrays (or lists) after calling:
```
start_scores, end_scores = model({'input_ids': np.array([input_ids]), # The tokens representing our input text.
'token_type_ids': np.array([segment_ids])}) # The segment IDs to differentiate question from context
```
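Something along these lines appears to do batched inference (a sketch only; the checkpoint name and example pairs are my assumptions, and with transformers 3.0.x the model returns the `(start_logits, end_logits)` tuple directly):

```python
from transformers import BertTokenizer, TFBertForQuestionAnswering

name = "bert-large-uncased-whole-word-masking-finetuned-squad"  # placeholder checkpoint
tokenizer = BertTokenizer.from_pretrained(name)
model = TFBertForQuestionAnswering.from_pretrained(name)

pairs = [
    ("Who wrote Hamlet?", "Hamlet is a tragedy written by William Shakespeare."),
    ("Where is the Eiffel Tower?", "The Eiffel Tower is in Paris."),
]
# batch_encode_plus pads the whole batch and builds token_type_ids for us.
encodings = tokenizer.batch_encode_plus(
    pairs, max_length=512, pad_to_max_length=True, return_tensors="tf"
)
start_scores, end_scores = model(
    encodings["input_ids"],
    attention_mask=encodings["attention_mask"],
    token_type_ids=encodings["token_type_ids"],
)  # each of shape (batch_size, seq_len)
```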
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5545/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5545/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5544 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5544/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5544/comments | https://api.github.com/repos/huggingface/transformers/issues/5544/events | https://github.com/huggingface/transformers/issues/5544 | 651,561,976 | MDU6SXNzdWU2NTE1NjE5NzY= | 5,544 | incorrect typehint for PreTrainedTokenizer.convert_ids_to_tokens() return value | {
"login": "andifunke",
"id": 18445361,
"node_id": "MDQ6VXNlcjE4NDQ1MzYx",
"avatar_url": "https://avatars.githubusercontent.com/u/18445361?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andifunke",
"html_url": "https://github.com/andifunke",
"followers_url": "https://api.github.com/users/andifunke/followers",
"following_url": "https://api.github.com/users/andifunke/following{/other_user}",
"gists_url": "https://api.github.com/users/andifunke/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andifunke/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andifunke/subscriptions",
"organizations_url": "https://api.github.com/users/andifunke/orgs",
"repos_url": "https://api.github.com/users/andifunke/repos",
"events_url": "https://api.github.com/users/andifunke/events{/privacy}",
"received_events_url": "https://api.github.com/users/andifunke/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"We want to make a clean patched release soon, so I opened #5551 to make sure it's fixed in it.\r\nIn the future, don't hesitate to directly open a PR for an issue with a clear fix like this :-)",
"Closed via #5551"
] | 1,594 | 1,594 | 1,594 | NONE | null | # 🐛 Bug
## Information
The return value of `tokenization_utils.PreTrainedTokenizer.convert_ids_to_tokens()` is declared as `Union[int, List[int]]`, while the method clearly returns an object of type `Union[str, List[str]]`.
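The corrected signature would presumably read as follows (a sketch; the `skip_special_tokens` parameter is copied from the current method):

```python
from typing import List, Union

def convert_ids_to_tokens(
    self, ids: Union[int, List[int]], skip_special_tokens: bool = False
) -> Union[str, List[str]]:
    ...
```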
Would you like me to open a PR? ;)
Thanks for the great work!
Andreas
PS: Excuse me for skipping my system specs since this is a rather small issue and I'm just a bit annoyed by PyCharm's inspection warning on this ;) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5544/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5544/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5543 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5543/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5543/comments | https://api.github.com/repos/huggingface/transformers/issues/5543/events | https://github.com/huggingface/transformers/issues/5543 | 651,544,580 | MDU6SXNzdWU2NTE1NDQ1ODA= | 5,543 | t5-base translation_en_to_de BLEU lower than the paper | {
"login": "cp-pc",
"id": 55797775,
"node_id": "MDQ6VXNlcjU1Nzk3Nzc1",
"avatar_url": "https://avatars.githubusercontent.com/u/55797775?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cp-pc",
"html_url": "https://github.com/cp-pc",
"followers_url": "https://api.github.com/users/cp-pc/followers",
"following_url": "https://api.github.com/users/cp-pc/following{/other_user}",
"gists_url": "https://api.github.com/users/cp-pc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cp-pc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cp-pc/subscriptions",
"organizations_url": "https://api.github.com/users/cp-pc/orgs",
"repos_url": "https://api.github.com/users/cp-pc/repos",
"events_url": "https://api.github.com/users/cp-pc/events{/privacy}",
"received_events_url": "https://api.github.com/users/cp-pc/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
},
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"I think the script `evaluate_wmt.py` was never really tested. Note that `t5-base` is not a fine-tuned model, but just a pretrained model. So you would definitely get better results by fine-tuning the model on translation. I'm not 100% sure, but I think the T5 paper shows the \"non-finetuned\" results for translation somewhere as well. Pinging @sshleifer - This might be interesting for as well.",
"What patrick said is exactly correct.\r\nI actually don't understand exactly which checkpoint points map to which table entries. \r\n\r\n@craffel, is 22.15 a reasonable zero shot BLEU for t5-base on en-de/?\r\nI am looking at appendix E, 3rd to rightmost column last page in [arxiv](https://arxiv.org/pdf/1910.10683.pdf), but am not sure which row corresponds to `t5-base` without finetuning. A machine readable version of that table would also be super helpful if it is easy to find.\r\n\r\n\r\nFor future reference/readers, `evaluate_wmt.py` has moved to `examples/seq2seq/run_eval.py`:\r\nand the new command (for en-romanian) is:\r\n```bash\r\nexport DATA_DIR=wmt_en_ro\r\npython run_eval.py t5-base \\\r\n $DATA_DIR/val.source t5_val_generations.txt \\\r\n --reference_path $DATA_DIR/val.target \\\r\n --score_path enro_bleu.json \\\r\n --task translation_en_to_ro \\\r\n # --n_obs 100 \\\r\n --device cuda \\\r\n --fp16 \\\r\n --bs 32\r\n```\r\nYou would need to update the first few args for your paths.\r\n\r\nI had some reasonable results finetuning mbart on WMT en-ro. Got BLEU 24 after finetuning vs 27 for mbart-large-en-ro. (both #s before preprocessing)\r\nI would be very interesting in seeing results/bug fixes for finetuning t5 on any language pair!",
"Hey all,\r\n\r\n@anonymous1100\r\n> Is it necessary to fine-tune the t5 model to reproduce the results of the paper?\r\n\r\nYes. The pre-trained checkpoints are trained on a multi-task mixture and need further fine-tuning to achieve maximal performance. See paragraph \"Multi-Task Pre-training\" in Section 3.7:\r\n\r\n> ... In Section 3.5.3, we showed that pre-training on a multi-task mixture of unsupervised and supervised tasks before fine-tuning worked as well as pre-training on the unsupervised task alone. This is the approach advocated by the “MT-DNN” [Liu et al., 2015, 2019b]. It also has the practical benefit of being able to monitor “downstream” performance for the entire duration of training, rather than just during fine-tuning. We therefore used multi-task pre-training in our final set of experiments.\r\n\r\n\r\n\r\n@patrickvonplaten \r\n> I'm not 100% sure, but I think the T5 paper shows the \"non-finetuned\" results for translation somewhere as well.\r\n\r\nNo, we never reported those numbers, but they are trivial to get by running eval on the pre-trained checkpoints, e.g. \r\n```bash\r\ngsutil -m cp -r gs://t5-data/pretrained_models/base/* \"${MODEL_DIR}\"\r\nt5_mesh_transformer \\\r\n --tpu=\"${TPU_NAME}\" \\\r\n --gcp_project=\"${PROJECT}\" \\\r\n --tpu_zone=\"${ZONE}\" \\\r\n --model_dir=\"${MODEL_DIR}\" \\\r\n --gin_file=\"gs://t5-data/pretrained_models/base/operative_config.gin\" \\\r\n --gin_file=\"eval.gin\" \\\r\n --gin_file=\"beam_search.gin\" \\\r\n --gin_param=\"MIXTURE_NAME = 'wmt_t2t_ende_v003'\" \\\r\n --gin_param=\"run.dataset_split = 'test'\" \\\r\n --gin_param=\"eval_checkpoint_step = 'all'\" \\\r\n --gin_param=\"utils.tpu_mesh_shape.tpu_topology = '2x2'\" # or whatever\r\n```\r\n\r\n\r\n\r\n@sshleifer \r\n> I actually don't understand exactly which checkpoint points map to which table entries.\r\n\r\nThe released checkpoints are (multi-task) pre-trained models which, after fine-tuning, produce the numbers in Table 14. We don't report the results before fine-tuning, and we didn't (and won't) release the fine-tuned checkpoints.\r\n\r\n> is 22.15 a reasonable zero shot BLEU for t5-base on en-de/?\r\n\r\nI ran the above command and got 28.664, so that seems very low. Not familiar with the HF eval script but I can take a look if you need ideas for figuring out what went wrong.\r\n\r\n> I am looking at appendix E, 3rd to rightmost column last page in arxiv, but am not sure which row corresponds to t5-base without finetuning.\r\n\r\nNone of the rows in that table correspond to any of the T5 models. Those numbers are the results of our giant systematic (ablation) study that we did before training any of the T5 models.\r\n\r\n> A machine readable version of that table would also be super helpful if it is easy to find.\r\n\r\nThe LaTeX source on arxiv have the tables in a format that would be easily parseable to whatever machine-readable format. https://arxiv.org/e-print/1910.10683",
"I think we figured out what went wrong. The tokenizer is not adding `eos_token=\"</s>\"` to the source document.\r\n\r\nIt should be, right?",
"The inputs should definitely have an EOS before they are fed into the model. If it's the convention in Transformers that the tokenizer takes care of that, then yes! In the T5 codebase, the tokenizer ittself does not add an EOS; that's handled by the packing and padding code.",
"Awesome! is there a `bos` token that goes before sequence (After the prefix?)\r\nlike `<s>` in Roberta/GPT2/Bart?\r\n\r\n(Is this the packing/padding code? https://github.com/google-research/text-to-text-transfer-transformer/blob/master/t5/data/preprocessors.py) ",
"> is there a bos token that goes before sequence (After the prefix?)\r\n\r\nNope.\r\n\r\n> Is this the packing/padding code? https://github.com/google-research/text-to-text-transfer-transformer/blob/master/t5/data/preprocessors.py\r\n\r\nNo, the packing/padding code is not part of the T5 codebase (T5 just provides (tokenized/preprocessed) sequences). It's assumed that it's handled by whatever the model implementation is. Here it is in the Mesh TF codebase: https://github.com/tensorflow/mesh/blob/master/mesh_tensorflow/transformer/dataset.py",
"Adding EOS does not appear to help zero shot performance in my first experiment, but open to hearing others' results. From a fork of this repo, you can run\r\n```bash\r\ngit fetch upstream\r\ngit checkout t5tok\r\n```\r\nto get a version of the tokenizer that adds EOS.\r\n\r\nWhen I ran eval on wmt_en_ro, I got\r\n```\r\nt5tok (with `</s>`):27.65\r\nmaster (no EOS): 27.87\r\n```\r\n\r\nThe commands to reproduce are in the PR [description](https://github.com/huggingface/transformers/pull/5866#issue-451879707)\r\n\r\nWould love to know results on other datasets!",
"For what it's worth I'm using T5 for other purposes (style transfer) and have found SotA results. It looks like the master branch has diverged, but among other changes I modified seq2seq.utils.encode_file to this:\r\n`lns = [prefix + text + \" </s>\" for text in lns]`",
"Hey @sshleifer , thanks for getting #5866 in. Does this resolve the discrepancy that originally started this issue, i.e. that non-fine-tuned T5-Base gets 28.664 BLEU on WMT EnDe using MTF whereas the HF version got 22.15?",
"IDK how OP got 22.1., I somehow just got BLEU 34.513 for en-de on what I thought was wmt_en_de 2019 (I can try to rerun on identical data if given pointer to such.)\r\n\r\nFor en-ro, I was getting 27.85 before the change, 27.65 after.\r\n\r\nI am using `corpus_bleu` across the whole test set.\r\n\r\ntest data:\r\n- verified that it is same as sacrebleu besides newline at end of file. \r\n\r\n### To Reproduce\r\n(21 mins on NVIDIA-RTX)\r\nGet Data:\r\n```bash\r\ncd examples/seq2seq/\r\nmkdir -p gens\r\nwget https://s3.amazonaws.com/datasets.huggingface.co/translation/wmt_en_de.tgz\r\ntar -xzvf wmt_en_de.tar.gz\r\nexport D=${PWD}/wmt_en_de\r\n```\r\n\r\nEval Command:\r\n```\r\npython run_eval.py t5-base $D/test.source --reference_path $D/test.target gens/t5_base_ende_test_gens.txt --score_path gens/t5_base_ende_test_bleu.json --bs 16 --task translation_en_to_de\r\n```\r\n\r\n\r\nGoing to leave this open until I am satisfied that I am getting a reasonably close BLEU on reasonably close data.",
"same 34.15 from \r\n```bash\r\nsacrebleu -t wmt16 -l en-de < gens/t5_base_ende_test_gens.txt\r\n```\r\n\r\nTranslations: [here](https://raw.githubusercontent.com/sshleifer/transformers_fork/t5-gens/examples/seq2seq/t5_base_ende_test_gens.txt)",
"Hey Sam, the validation set we used was `newstest2013`. You can get the data here\r\nhttps://nlp.stanford.edu/projects/nmt/data/wmt14.en-de/newstest2013.en\r\nhttps://nlp.stanford.edu/projects/nmt/data/wmt14.en-de/newstest2013.de\r\nor if you want to get exactly the same preprocessed inputs and outputs, you can use \r\n```\r\npython -c 'import t5; ds = t5.data.TaskRegistry.get(\"wmt_t2t_ende_v003\").get_dataset({\"inputs\": 512, \"targets\": 512}, \"validation\")'\r\npython -m t5.scripts.dump_task --task=wmt_t2t_ende_v003 --split=validation\r\n```",
"I got `{\"bleu\": 23.8653, \"n_obs\": 3000, \"runtime\": 319, \"seconds_per_sample\": 0.1063}` on that data (`newstest2013.en`), with some sacrebleu warnings about my data not being properly detokenized.\r\n\r\n\r\n```\r\nWARNING:root:That's 100 lines that end in a tokenized period ('.')\r\nWARNING:root:It looks like you forgot to detokenize your test data, which may hurt your score.\r\nWARNING:root:If you insist your data is detokenized, or don't care, you can suppress this message with '--force'.\r\n```\r\n+ `{'bleu': 24.2978}` with by passing tokenize='intl' to sacrebleu.\r\n\r\n\r\n\r\n\r\nRan the t5 command now to try to check pre/post processing, but got:\r\n\r\n```\r\nAssertionError: Sizes do not match: 284246 vs 284253 for /home/shleifer/tensorflow_datasets/downloads/extracted/TAR_GZ.data.stat.org_wmt1_tran-task_trai-para-nc-6LWgxBgzCHdv_LtotNmnXjpCH6OhzkF8D3v10aRrznA.tgz/training-parallel-nc-v13/news-commentary-v13.de-en.de vs /home/shleifer/tensorflow_datasets/downloads/extracted/TAR_GZ.data.stat.org_wmt1_tran-task_trai-para-nc-6LWgxBgzCHdv_LtotNmnXjpCH6OhzkF8D3v10aRrznA.tgz/training-parallel-nc-v13/news-commentary-v13.de-en.en.\r\n```\r\n\r\nI think it is building more than just the val set for 1 language.\r\n\r\n",
"> I got {\"bleu\": 23.8653, \"n_obs\": 3000, \"runtime\": 319, \"seconds_per_sample\": 0.1063} on that data (newstest2013.en), with some sacrebleu warnings about my data not being properly detokenized.\r\n\r\nWe have always run the BLEU on the TFDS versions and I don't ever recall seeing that error. Maybe there is something wrong with the text files I linked? I think sacrebleu can also download the appropriate test sets.\r\n\r\n> Ran the t5 command now to try to check pre/post processing, but got:\r\n\r\nThat looks like a TFDS error, not sure how that would be happening. Do you want to open an issue in the TFDS repo and tag @adarob?",
"Yeah I can file an issue. Do you have an easy way to share your repo's generations?\r\nMine are here\r\n[t5_base_newstest2013.de](https://github.com/huggingface/transformers/files/5163691/t5_base_gens.txt)\r\n",
"Do you mean the predictions from T5 when run via the Mesh TF Transformer? Here are the inputs/targets/predictions that got spit out when I ran https://github.com/huggingface/transformers/issues/5543#issuecomment-656901662\r\n\r\n[wmttmp_test_eval_wmt_t2t_ende_v003_targets.txt](https://github.com/huggingface/transformers/files/5163804/wmttmp_test_eval_wmt_t2t_ende_v003_targets.txt)\r\n[wmttmp_test_eval_wmt_t2t_ende_v003_inputs.txt](https://github.com/huggingface/transformers/files/5163805/wmttmp_test_eval_wmt_t2t_ende_v003_inputs.txt)\r\n[wmttmp_test_eval_wmt_t2t_ende_v003_999900_predictions.txt](https://github.com/huggingface/transformers/files/5163806/wmttmp_test_eval_wmt_t2t_ende_v003_999900_predictions.txt)\r\n\r\nAlso, I apologize, I got mixed up in the span of time between when this issue started and now. This thread is about a mismatch of performance on the test set, but since this issue was re-opened last week I was thinking we were discussing the validation set. You should use `newstest2014`; that is the test set used in the paper, mentioned in https://github.com/huggingface/transformers/issues/5543#issue-651544580, and is what I ran to get the predictions above and the score in https://github.com/huggingface/transformers/issues/5543#issuecomment-656901662 Here are the corresponding text files from Stanford NLP\r\nhttps://nlp.stanford.edu/projects/nmt/data/wmt14.en-de/newstest2014.en\r\nhttps://nlp.stanford.edu/projects/nmt/data/wmt14.en-de/newstest2014.de",
"On that data:\r\n\r\n+ huggingface: 27.9273\r\n+ mesh: 28.6642 (from your .predictions) file.\r\n\r\nevaluation: used (pass tokenize='intl' to calculate_bleu). This improves both scores by 0.7 BLEU.",
"Cool, that is not too far off. Are you using beam search with the same hparams that we used? If so, we could maybe chalk this up to numerical differences. If not, I bet that beam search would explain a 0.7 BLEU difference.",
"That was the issue -- now equivalent! I got huggingface: 28.51 by adding `--max_length=128 --length_penalty=0.6`. (Num beams was already correct in the config.) \r\n\r\nsemi-interestingly, you can get 28.7 by adding \"translate English to German: translate English to German:\" (twice) to every source example (I was doing this by accident).",
"Awesome, great sleuthing! Good to hear that there is no disparity here."
] | 1,594 | 1,599 | 1,599 | NONE | null | I downloaded the "newstest2014.en" and "newstest2014.de" datasets. Then I used examples/translation/t5/evaluate_wmt.py to evaluate the en-to-de BLEU, and the final BLEU obtained was 22.15, which is much lower than in the paper. I used the t5-base model and my transformers version is 2.11.0.
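For reference, I scored the generated file roughly like this (a sketch; the file names are placeholders for my local outputs):

```python
import sacrebleu

with open("t5_generations.de") as hyp_file, open("newstest2014.de") as ref_file:
    hypotheses = [line.strip() for line in hyp_file]
    references = [line.strip() for line in ref_file]

print(sacrebleu.corpus_bleu(hypotheses, [references]).score)
```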
Is there something wrong with my procedure? Is it necessary to fine-tune the t5 model to reproduce the results of the paper? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5543/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5543/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5542 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5542/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5542/comments | https://api.github.com/repos/huggingface/transformers/issues/5542/events | https://github.com/huggingface/transformers/pull/5542 | 651,474,427 | MDExOlB1bGxSZXF1ZXN0NDQ0NzIzNDE2 | 5,542 | QA pipeline should mask CLS tokens after handling impossible answer | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Is the pytorch test failure unrelated here @mfuntowicz ?",
"Will check this :)",
"Closed as fix was included in #5496 "
] | 1,594 | 1,651 | 1,594 | MEMBER | null | Signed-off-by: Morgan Funtowicz <[email protected]> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5542/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5542/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5542",
"html_url": "https://github.com/huggingface/transformers/pull/5542",
"diff_url": "https://github.com/huggingface/transformers/pull/5542.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5542.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/5541 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5541/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5541/comments | https://api.github.com/repos/huggingface/transformers/issues/5541/events | https://github.com/huggingface/transformers/issues/5541 | 651,465,411 | MDU6SXNzdWU2NTE0NjU0MTE= | 5,541 | High F1 score. But poor accuracy during Inference due to tokenisation | {
"login": "sudharsan2020",
"id": 9370130,
"node_id": "MDQ6VXNlcjkzNzAxMzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/9370130?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sudharsan2020",
"html_url": "https://github.com/sudharsan2020",
"followers_url": "https://api.github.com/users/sudharsan2020/followers",
"following_url": "https://api.github.com/users/sudharsan2020/following{/other_user}",
"gists_url": "https://api.github.com/users/sudharsan2020/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sudharsan2020/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sudharsan2020/subscriptions",
"organizations_url": "https://api.github.com/users/sudharsan2020/orgs",
"repos_url": "https://api.github.com/users/sudharsan2020/repos",
"events_url": "https://api.github.com/users/sudharsan2020/events{/privacy}",
"received_events_url": "https://api.github.com/users/sudharsan2020/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hello! Why do you believe the tokenization to be the issue here?",
"@LysandreJik Thanks for reaching out.\r\n\r\nPlease find my observations with the inconsistency in the Tokenizer(possible issue), since I was using the HuggingFace provided script for training the custom NER Model.\r\n\r\n**1. Expected name:**\r\nAUDLEY THOMPSON\r\n\r\nPredicted name:\r\n{'entity_group': 'B-PER', 'score': 0.9993636608123779, 'word': 'AUDLE'}, \r\n{'entity_group': 'I-PER', 'score': 0.8126876294612885, 'word': '##Y THOMPS'}\r\n\r\n**Issue:**\r\nLast two letters got skipped\r\n\r\n**2. Expected name:**\r\nDANIEL, BROWN\r\n\r\nPredicted name:\r\n{'entity_group': 'B-PER', 'score': 0.9559168517589569, 'word': 'DAN'}, \r\n{'entity_group': 'I-PER', 'score': 0.9092316627502441, 'word': '##IE'}, \r\n{'entity_group': 'B-PER', 'score': 0.5071505904197693, 'word': '##L'}, \r\n{'entity_group': 'I-PER', 'score': 0.849787175655365, 'word': ', BROWN'}\r\n\r\n**Issue:**\r\nThe wordpiece tokenizer splits the begin entity into smaller pieces. However model predicts that as an \"I-PER\" entity which makes it really difficult to merge continuous entities\r\n\r\n\r\n**3. Expected name:**\r\nVINEY, PAJTSHIA\r\n\r\nPredicted name:\r\n{'entity_group': 'B-PER', 'score': 0.9991838335990906, 'word': 'VI'}, \r\n{'entity_group': 'I-PER', 'score': 0.9591831763585409, 'word': '##Y , PA'} \r\n{'entity_group': 'I-PER', 'score': 0.7927274107933044, 'word': '##IA'}\r\n\r\n**Issue:**\r\n'NE' word is missed in the name: 'VINEY'\r\n'JTSH' word is missed in the name: 'PAJTSHIA'\r\n\r\n**4. Expected name:**\r\nPierson, Garcia \r\n\r\nPredicted name:\r\n{'entity_group': 'B-PER', 'score': 0.9972472190856934, 'word': 'Pierson'}, \r\n{'entity_group': 'I-PER', 'score': 0.8200799822807312, 'word': 'GA'}, \r\n{'entity_group': 'I-PER', 'score': 0.8131067156791687, 'word': '##IA'}\r\n\r\n**Issue:**\r\n'RC' word is missed in the name: 'Garcia'\r\n\r\nPlease let me know if I am missing something.\r\n**Missing characters** and **split tokens** are major reasons for the **accuracy drop** while merging the Begin(**B-PER**) and Info(**I-PER**) entities.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,594 | 1,599 | 1,599 | NONE | null | # 🐛 Bug
## Information
I am using the **Bert-Base-cased** model to train my custom named entity recognition (NER) model with a sequence length of **512**.
Language I am using the model on: **English**
The problem arises when using:
* [ ] the official example scripts: **token-classification/run_ner.py**
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: **Named entity recognition**
* [ ] my own task or dataset: **Custom Dataset**
## To reproduce
Steps to reproduce the behavior:
1. Use the default NER pipeline to load the custom trained model:
```python
# `pipeline` comes from transformers; `model_path` points at the trained model.
self.model_prediction_pipeline = pipeline(
    "ner",
    model=model_path,
    tokenizer=model_path,
    grouped_entities=True,
)
```
2. I've attached the Evaluation results of the model.
`eval_loss = 0.021479165139844086`
`eval_precision = 0.8725970149253731`
`eval_recall = 0.8868932038834951`
`eval_f1 = 0.8796870297923562`
`epoch = 5.0`
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
1. The model should produce accuracy in line with the F1 score.
2. However, during inference I am not getting accuracy over **30%**.
3. I am not sure if the inconsistent tokenisation leads to the poor results (see the sketch after this list).
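A rough post-processing sketch for the wordpiece issue (my own workaround idea, not part of the library; `entities` stands for the list of dicts the pipeline returns):

```python
def merge_wordpieces(entities):
    # Fold '##'-prefixed fragments back into the preceding entity group.
    merged = []
    for ent in entities:
        if ent["word"].startswith("##") and merged:
            merged[-1]["word"] += ent["word"][2:]
            merged[-1]["score"] = min(merged[-1]["score"], ent["score"])
        else:
            merged.append(dict(ent))
    return merged
```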
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.0.0
- Platform: Linux-4.15.0-109-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.7
- PyTorch version (GPU?): 1.4.0
- Tensorflow version (GPU?):NA
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5541/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5541/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5540 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5540/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5540/comments | https://api.github.com/repos/huggingface/transformers/issues/5540/events | https://github.com/huggingface/transformers/issues/5540 | 651,443,634 | MDU6SXNzdWU2NTE0NDM2MzQ= | 5,540 | MobileBert embedding vectors values | {
"login": "datapaintings",
"id": 38192773,
"node_id": "MDQ6VXNlcjM4MTkyNzcz",
"avatar_url": "https://avatars.githubusercontent.com/u/38192773?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/datapaintings",
"html_url": "https://github.com/datapaintings",
"followers_url": "https://api.github.com/users/datapaintings/followers",
"following_url": "https://api.github.com/users/datapaintings/following{/other_user}",
"gists_url": "https://api.github.com/users/datapaintings/gists{/gist_id}",
"starred_url": "https://api.github.com/users/datapaintings/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/datapaintings/subscriptions",
"organizations_url": "https://api.github.com/users/datapaintings/orgs",
"repos_url": "https://api.github.com/users/datapaintings/repos",
"events_url": "https://api.github.com/users/datapaintings/events{/privacy}",
"received_events_url": "https://api.github.com/users/datapaintings/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi! Firstly, in your code you're using a `MobileBertModel` with tensorflow inputs; does that work? You should either use torch inputs or use `TFMobileBertModel` instead.\r\n\r\nThe model outputs have no reason to be bound between [-1, 1], as they're logits. You can pass them through a softmax layer to get the total sum to be 1 and have each result bound between -1 and 1!",
"Hello, thank you for your answer. I modified the function according to your suggestion, applying softmax on outputs to get hidden states values:\r\n\r\n```\r\ndef softmax(x):\r\n e_x = np.exp((x - np.max(x))/2000000)\r\n return e_x / e_x.sum(axis=0)\r\n\r\ndef BERT_embeddings2(test_x, max_seq_len):\r\n tokenizer = MobileBertTokenizer.from_pretrained('google/mobilebert-uncased')\r\n model = TFMobileBertModel.from_pretrained('google/mobilebert-uncased')\r\n batches = np.array_split(test_x, 100) \r\n encoded = None\r\n for batch in batches:\r\n encoded_text = tokenizer.batch_encode_plus(batch.values, max_length=max_seq_len, pad_to_max_length=True, truncation=True)\r\n input_ids = tf.constant(encoded_text['input_ids'])#[None, :]\r\n outputs = model(input_ids)\r\n hidden_states = softmax(outputs[1])\r\n if encoded is None:\r\n encoded = hidden_states\r\n else:\r\n encoded = np.concatenate((encoded, hidden_states), axis=0)\r\n```\r\nThen I used encoded data to run experiment on multiple neural network architectures. What I observed, is that the loss function does not converge and is similar for various network shapes. This is not appearing while training on (not mobile) Bert Model. Do you know what can be the reason?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,594 | 1,600 | 1,600 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): MobileBert
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
```py
import numpy as np
import pandas as pd
import tensorflow as tf
from transformers import MobileBertTokenizer, MobileBertModel

def BERT_embeddings(data, max_seq_len):
    tokenizer = MobileBertTokenizer.from_pretrained('google/mobilebert-uncased')
    # NOTE: MobileBertModel is the PyTorch class, yet TF tensors are fed to it below.
    model = MobileBertModel.from_pretrained('google/mobilebert-uncased')
    batches = np.array_split(data, 5)
    encoded = None
    for batch in batches:
        encoded_text = tokenizer.batch_encode_plus(batch.values, max_length=max_seq_len, pad_to_max_length=True, truncation=True)
        input_ids = tf.constant(encoded_text['input_ids'])
        outputs = model(input_ids)
        if encoded is None:
            encoded = outputs[1]  # pooled output
        else:
            encoded = np.concatenate((encoded, outputs[1]), axis=0)
    return pd.DataFrame(data=encoded, index=data.index)

print(BERT_embeddings(data, max_seq_len=18))  # `data` is the one-column pandas object
```
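For comparison, a TF-consistent variant (matching the TF model class with TF inputs) would look roughly like this sketch; the example sentences are illustrative, and with transformers 3.0.x the first two outputs are `(sequence_output, pooled_output)`:

```python
from transformers import MobileBertTokenizer, TFMobileBertModel

tokenizer = MobileBertTokenizer.from_pretrained("google/mobilebert-uncased")
tf_model = TFMobileBertModel.from_pretrained("google/mobilebert-uncased")

encoded = tokenizer.batch_encode_plus(
    ["a first sentence", "a second sentence"],
    max_length=18, pad_to_max_length=True, truncation=True, return_tensors="tf",
)
sequence_output, pooled_output = tf_model(encoded["input_ids"])
print(pooled_output.shape)  # (batch_size, hidden_size)
```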
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
I would like to create sentence embeddings with MobileBert
## To reproduce
Steps to reproduce the behavior:
1. 'text' is one column pandas object
2. run above code
3.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
The embeddings are vectors with very large values, like `[-24888514.0 222398.593750...]`
## Expected behavior
They should be in the [-1, 1] range.
<!-- A clear and concise description of what you would expect to happen. -->
I would expect the elements of a vector to be in [-1, 1], not in the hundreds of thousands.
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.0.1
- Platform:
- Python version: 3.7.3
- PyTorch version (GPU?): -
- Tensorflow version (GPU?): 2.0.0
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5540/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5540/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5539 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5539/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5539/comments | https://api.github.com/repos/huggingface/transformers/issues/5539/events | https://github.com/huggingface/transformers/issues/5539 | 651,437,349 | MDU6SXNzdWU2NTE0MzczNDk= | 5,539 | Fine Tuning Using /question-answering/run_squad.py | {
"login": "anirbansaha96",
"id": 52232270,
"node_id": "MDQ6VXNlcjUyMjMyMjcw",
"avatar_url": "https://avatars.githubusercontent.com/u/52232270?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anirbansaha96",
"html_url": "https://github.com/anirbansaha96",
"followers_url": "https://api.github.com/users/anirbansaha96/followers",
"following_url": "https://api.github.com/users/anirbansaha96/following{/other_user}",
"gists_url": "https://api.github.com/users/anirbansaha96/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anirbansaha96/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anirbansaha96/subscriptions",
"organizations_url": "https://api.github.com/users/anirbansaha96/orgs",
"repos_url": "https://api.github.com/users/anirbansaha96/repos",
"events_url": "https://api.github.com/users/anirbansaha96/events{/privacy}",
"received_events_url": "https://api.github.com/users/anirbansaha96/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, there's a [readme](https://github.com/huggingface/transformers/tree/master/examples/question-answering) in `question-answering`!",
"Yes, I've gone through this resource. What I want to achieve is to fine-tune on a custom resource and not on SQuAD dataset. I want to train it on Question-Answer pairs.\r\nYou can find more about the issue [here](https://ai.stackexchange.com/questions/22358/how-to-fine-tune-bert-for-question-answering/22362?noredirect=1#comment34182_22362)."
] | 1,594 | 1,594 | 1,594 | NONE | null | How do I use `/question-answering/run_squad.py` to fine-tune on my own custom dataset?
[StackOverflow][1]
[1]: https://stackoverflow.com/questions/62752709/how-to-fine-tune-bert-for-question-answering | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5539/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5539/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5538 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5538/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5538/comments | https://api.github.com/repos/huggingface/transformers/issues/5538/events | https://github.com/huggingface/transformers/issues/5538 | 651,394,176 | MDU6SXNzdWU2NTEzOTQxNzY= | 5,538 | Easier way to download pretrained model files to local | {
"login": "m0hit-aggarwal",
"id": 33771685,
"node_id": "MDQ6VXNlcjMzNzcxNjg1",
"avatar_url": "https://avatars.githubusercontent.com/u/33771685?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/m0hit-aggarwal",
"html_url": "https://github.com/m0hit-aggarwal",
"followers_url": "https://api.github.com/users/m0hit-aggarwal/followers",
"following_url": "https://api.github.com/users/m0hit-aggarwal/following{/other_user}",
"gists_url": "https://api.github.com/users/m0hit-aggarwal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/m0hit-aggarwal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/m0hit-aggarwal/subscriptions",
"organizations_url": "https://api.github.com/users/m0hit-aggarwal/orgs",
"repos_url": "https://api.github.com/users/m0hit-aggarwal/repos",
"events_url": "https://api.github.com/users/m0hit-aggarwal/events{/privacy}",
"received_events_url": "https://api.github.com/users/m0hit-aggarwal/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Any news on this? We have the same problem when running a Docker-container (it takes a while to load the model all the time)",
"Hey! The recommended way of handling that is to use `from_pretrained`/`save_pretrained` to a directory, and to load from that directory from then on.\r\n\r\nIf for some reason that's not sufficient, you can use the `huggingface_hub` library as shown in the [following guide](https://huggingface.co/docs/huggingface_hub/how-to-downstream). I believe repositories cannot be downloaded as zips currently, cc @osanseviero @julien-c "
] | 1,594 | 1,650 | 1,599 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiasts can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
Is there any way we can save the vocab and model files locally without having to run the following code with the cache_dir parameter? What I am asking for is some utility or endpoint from which we could get a tar of all model files.
```
from transformers import BartTokenizer, BartForSequenceClassification
tokenizer = BartTokenizer.from_pretrained('facebook/bart-large',cache_dir='path/to/local_dir')
model = BartForSequenceClassification.from_pretrained('facebook/bart-large',cache_dir='path/to/local_dir')
```
The reason for such a requirement is that we run our code in VMs where there is no internet access, so we have to run the code locally for every model and save the files. It would be helpful if there were an easier way to download all the files for pretrained models as a tar or zip file.
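For anyone else, the usual workaround is a sketch like this: download once where there is internet access, then `save_pretrained` to a directory that can be copied to the offline VM.

```python
from transformers import BartTokenizer, BartForSequenceClassification

# Run this once on a machine with internet access.
tokenizer = BartTokenizer.from_pretrained('facebook/bart-large')
model = BartForSequenceClassification.from_pretrained('facebook/bart-large')
tokenizer.save_pretrained('path/to/local_dir')
model.save_pretrained('path/to/local_dir')

# Then, on the offline VM (after copying the directory over):
tokenizer = BartTokenizer.from_pretrained('path/to/local_dir')
model = BartForSequenceClassification.from_pretrained('path/to/local_dir')
```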
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5538/reactions",
"total_count": 7,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5538/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5537 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5537/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5537/comments | https://api.github.com/repos/huggingface/transformers/issues/5537/events | https://github.com/huggingface/transformers/issues/5537 | 651,392,133 | MDU6SXNzdWU2NTEzOTIxMzM= | 5,537 | Longformer - Compression | {
"login": "Nouman97",
"id": 42269506,
"node_id": "MDQ6VXNlcjQyMjY5NTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/42269506?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Nouman97",
"html_url": "https://github.com/Nouman97",
"followers_url": "https://api.github.com/users/Nouman97/followers",
"following_url": "https://api.github.com/users/Nouman97/following{/other_user}",
"gists_url": "https://api.github.com/users/Nouman97/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Nouman97/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Nouman97/subscriptions",
"organizations_url": "https://api.github.com/users/Nouman97/orgs",
"repos_url": "https://api.github.com/users/Nouman97/repos",
"events_url": "https://api.github.com/users/Nouman97/events{/privacy}",
"received_events_url": "https://api.github.com/users/Nouman97/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"You could check out the example script under distillation: `/home/patrick/hugging_face/transformers/examples/distillation`",
"Did you upload the model to the model hub? I would be great so that we can take a closer look :-) ",
"Hello, thanks for the reply. I checked the train.py script under examples/distillation, but it seems that it does not cater to longformer models (the mentioned models within the script are BERT, RoBERTa, and GPT2). Yes, I have uploaded the model: https://s3.amazonaws.com/models.huggingface.co/bert/Nomi97/Chatbot_QA.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,594 | 1,599 | 1,599 | NONE | null | Hello, I have trained a longformer model on my custom question answering dataset, and now I wanted to know if there is a way to compress this trained model?
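The distillation example mentioned in the comments targets BERT, RoBERTa, and GPT-2 rather than Longformer, but the core soft-label loss carries over; a minimal sketch, assuming a teacher/student pair of Longformer checkpoints (the temperature value is an arbitrary choice):
```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Soft cross-entropy between temperature-scaled teacher and student
    # distributions, as in standard knowledge distillation.
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2
```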
Thank you! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5537/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5537/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5536 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5536/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5536/comments | https://api.github.com/repos/huggingface/transformers/issues/5536/events | https://github.com/huggingface/transformers/pull/5536 | 651,345,420 | MDExOlB1bGxSZXF1ZXN0NDQ0NjE3NTQ3 | 5,536 | Create README | {
"login": "DeepsMoseli",
"id": 29062994,
"node_id": "MDQ6VXNlcjI5MDYyOTk0",
"avatar_url": "https://avatars.githubusercontent.com/u/29062994?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DeepsMoseli",
"html_url": "https://github.com/DeepsMoseli",
"followers_url": "https://api.github.com/users/DeepsMoseli/followers",
"following_url": "https://api.github.com/users/DeepsMoseli/following{/other_user}",
"gists_url": "https://api.github.com/users/DeepsMoseli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DeepsMoseli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DeepsMoseli/subscriptions",
"organizations_url": "https://api.github.com/users/DeepsMoseli/orgs",
"repos_url": "https://api.github.com/users/DeepsMoseli/repos",
"events_url": "https://api.github.com/users/DeepsMoseli/events{/privacy}",
"received_events_url": "https://api.github.com/users/DeepsMoseli/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5536?src=pr&el=h1) Report\n> Merging [#5536](https://codecov.io/gh/huggingface/transformers/pull/5536?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/58cca47c16149e43d1b516623d59e3c5d97f695e&el=desc) will **decrease** coverage by `1.14%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5536?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5536 +/- ##\n==========================================\n- Coverage 77.83% 76.69% -1.15% \n==========================================\n Files 141 141 \n Lines 24634 24634 \n==========================================\n- Hits 19175 18892 -283 \n- Misses 5459 5742 +283 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5536?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5536/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.92% <0.00%> (-68.47%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/5536/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: |\n| [src/transformers/optimization\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5536/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5536/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `33.56% <0.00%> (-8.93%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5536/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `92.44% <0.00%> (-1.17%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5536/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+1.00%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5536/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.76% <0.00%> (+32.51%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5536?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5536?src=pr&el=footer). Last update [58cca47...30f3554](https://codecov.io/gh/huggingface/transformers/pull/5536?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,594 | 1,594 | 1,594 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5536/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5536/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5536",
"html_url": "https://github.com/huggingface/transformers/pull/5536",
"diff_url": "https://github.com/huggingface/transformers/pull/5536.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5536.patch",
"merged_at": 1594118296000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5535 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5535/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5535/comments | https://api.github.com/repos/huggingface/transformers/issues/5535/events | https://github.com/huggingface/transformers/issues/5535 | 651,321,670 | MDU6SXNzdWU2NTEzMjE2NzA= | 5,535 | How i can set the special token <|endoftext|> to an other id ? | {
"login": "Nkonstan",
"id": 35643708,
"node_id": "MDQ6VXNlcjM1NjQzNzA4",
"avatar_url": "https://avatars.githubusercontent.com/u/35643708?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Nkonstan",
"html_url": "https://github.com/Nkonstan",
"followers_url": "https://api.github.com/users/Nkonstan/followers",
"following_url": "https://api.github.com/users/Nkonstan/following{/other_user}",
"gists_url": "https://api.github.com/users/Nkonstan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Nkonstan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Nkonstan/subscriptions",
"organizations_url": "https://api.github.com/users/Nkonstan/orgs",
"repos_url": "https://api.github.com/users/Nkonstan/repos",
"events_url": "https://api.github.com/users/Nkonstan/events{/privacy}",
"received_events_url": "https://api.github.com/users/Nkonstan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
" Now is set to 0 and let's say i want to change it to 50256 for GPT-2 .",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,594 | 1,599 | 1,599 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**:
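For GPT-2, `<|endoftext|>` already maps to id 50256 in the pretrained vocab. Assuming the question is about which id the model treats as end-of-text (the id of an existing token is fixed by the vocab file itself), a sketch:
```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
print(tokenizer.eos_token, tokenizer.eos_token_id)  # '<|endoftext|>' 50256

# Point the model's eos/pad ids at the tokenizer's value explicitly:
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.config.eos_token_id = tokenizer.eos_token_id
model.config.pad_token_id = tokenizer.eos_token_id
```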
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5535/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5535/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5534 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5534/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5534/comments | https://api.github.com/repos/huggingface/transformers/issues/5534/events | https://github.com/huggingface/transformers/issues/5534 | 651,300,017 | MDU6SXNzdWU2NTEzMDAwMTc= | 5,534 | How-to-fine-tune-bert-for-question-answering? | {
"login": "anirbansaha96",
"id": 52232270,
"node_id": "MDQ6VXNlcjUyMjMyMjcw",
"avatar_url": "https://avatars.githubusercontent.com/u/52232270?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anirbansaha96",
"html_url": "https://github.com/anirbansaha96",
"followers_url": "https://api.github.com/users/anirbansaha96/followers",
"following_url": "https://api.github.com/users/anirbansaha96/following{/other_user}",
"gists_url": "https://api.github.com/users/anirbansaha96/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anirbansaha96/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anirbansaha96/subscriptions",
"organizations_url": "https://api.github.com/users/anirbansaha96/orgs",
"repos_url": "https://api.github.com/users/anirbansaha96/repos",
"events_url": "https://api.github.com/users/anirbansaha96/events{/privacy}",
"received_events_url": "https://api.github.com/users/anirbansaha96/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
}
] | closed | false | null | [] | [
"Hi, how did you solve this project? I want to do the same in another domain and I want to know what is the best approach."
] | 1,594 | 1,697 | 1,594 | NONE | null | I wish to train two domain-specific models:
Domain 1: Constitution and related Legal Documents
Domain 2: Technical and related documents.
For Domain 1, I've access to a text-corpus with texts from the constitution and no question-context-answer tuples. For Domain 2, I've access to Question-Answer pairs.
Is it possible to fine-tune a light-weight BERT model for Question-Answering using just the data mentioned above?
If yes, what are the resources to achieve this task?
Some examples from the huggingface/models library would be mrm8488/bert-tiny-5-finetuned-squadv2, sshleifer/tiny-distilbert-base-cased-distilled-squad, and twmkn9/albert-base-v2-squad2.
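For the extractive setting, a single fine-tuning step looks roughly like the sketch below (it reuses one of the checkpoints above; the context sentence and span indices are made-up placeholders). Note that extractive QA needs (question, context, answer-span) triples, so the Domain 2 question-answer pairs would first need contexts attached:
```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

name = "mrm8488/bert-tiny-5-finetuned-squadv2"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForQuestionAnswering.from_pretrained(name)

inputs = tokenizer("Who wrote the constitution?",
                   "The constitution was written by the delegates.",
                   return_tensors="pt")
start_positions = torch.tensor([7])  # token index where the answer starts (placeholder)
end_positions = torch.tensor([8])    # token index where the answer ends (placeholder)

# With span labels supplied, transformers 3.x returns a tuple with loss first.
loss = model(**inputs, start_positions=start_positions, end_positions=end_positions)[0]
loss.backward()
```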
[You can find the question here.](https://datascience.stackexchange.com/questions/77213/how-to-fine-tune-bert-for-question-answering) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5534/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5534/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5533 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5533/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5533/comments | https://api.github.com/repos/huggingface/transformers/issues/5533/events | https://github.com/huggingface/transformers/pull/5533 | 651,214,916 | MDExOlB1bGxSZXF1ZXN0NDQ0NTExODM3 | 5,533 | [wip] Label smooth | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,594 | 1,595 | 1,595 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5533/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5533/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5533",
"html_url": "https://github.com/huggingface/transformers/pull/5533",
"diff_url": "https://github.com/huggingface/transformers/pull/5533.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5533.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5532 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5532/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5532/comments | https://api.github.com/repos/huggingface/transformers/issues/5532/events | https://github.com/huggingface/transformers/issues/5532 | 651,178,431 | MDU6SXNzdWU2NTExNzg0MzE= | 5,532 | Error in Loading bert-large-uncased-whole-word-masking-finetuned-squad | {
"login": "hi-weiyuan",
"id": 34810978,
"node_id": "MDQ6VXNlcjM0ODEwOTc4",
"avatar_url": "https://avatars.githubusercontent.com/u/34810978?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hi-weiyuan",
"html_url": "https://github.com/hi-weiyuan",
"followers_url": "https://api.github.com/users/hi-weiyuan/followers",
"following_url": "https://api.github.com/users/hi-weiyuan/following{/other_user}",
"gists_url": "https://api.github.com/users/hi-weiyuan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hi-weiyuan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hi-weiyuan/subscriptions",
"organizations_url": "https://api.github.com/users/hi-weiyuan/orgs",
"repos_url": "https://api.github.com/users/hi-weiyuan/repos",
"events_url": "https://api.github.com/users/hi-weiyuan/events{/privacy}",
"received_events_url": "https://api.github.com/users/hi-weiyuan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Sounds like your downloads were corrupted.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,593 | 1,599 | 1,599 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
In order to use bert-large-uncased-whole-word-masking-finetuned-squad, I first downloaded it from https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-whole-word-masking-finetuned-squad-config.json and https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-whole-word-masking-finetuned-squad-pytorch_model.bin. Then, as I had done with other pretrained models before, I renamed the files to config.json and pytorch_model.bin. Finally, I ran `bert = BertForQuestionAnswering.from_pretrained(root_path + "/bert-large-uncased-whole-word-masked-finetuned-squad/")` and hit some errors.
The error info is as follows:
```
File "/home/dl/anaconda3/lib/python3.6/site-packages/transformers/modeling_utils.py", line 662, in from_pretrained
    "Unable to load weights from pytorch checkpoint file. "
OSError: Unable to load weights from pytorch checkpoint file. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
*** Error in `/home/dl/anaconda3/bin/python3': corrupted double-linked list: 0x000055e3278c88b0 ***
```
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**:
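Per the reply in the comments, this error pattern usually means the download was corrupted; a quick sanity check that the checkpoint file itself is readable (the path is a placeholder):
```python
import torch

# A truncated or corrupted download fails right here with a similar error.
state_dict = torch.load("path/to/pytorch_model.bin", map_location="cpu")
print(len(state_dict), "tensors loaded")
```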
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5532/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5532/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5531 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5531/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5531/comments | https://api.github.com/repos/huggingface/transformers/issues/5531/events | https://github.com/huggingface/transformers/pull/5531 | 651,156,633 | MDExOlB1bGxSZXF1ZXN0NDQ0NDY3OTEz | 5,531 | fixed ImportError: cannot import name 'hf_bucket_url' on convert_pytorch_checkpoint_to_tf2.py | {
"login": "mohataher",
"id": 1592974,
"node_id": "MDQ6VXNlcjE1OTI5NzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1592974?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mohataher",
"html_url": "https://github.com/mohataher",
"followers_url": "https://api.github.com/users/mohataher/followers",
"following_url": "https://api.github.com/users/mohataher/following{/other_user}",
"gists_url": "https://api.github.com/users/mohataher/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mohataher/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mohataher/subscriptions",
"organizations_url": "https://api.github.com/users/mohataher/orgs",
"repos_url": "https://api.github.com/users/mohataher/repos",
"events_url": "https://api.github.com/users/mohataher/events{/privacy}",
"received_events_url": "https://api.github.com/users/mohataher/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"`from transformers.file_utils import default_cache_path, hf_bucket_url`\r\nI want to import hf_bucket_url on Colab, but I still got the error\r\n\"ImportError: cannot import name 'hf_bucket_url' from 'transformers.file_utils' (/usr/local/lib/python3.9/dist-packages/transformers/file_utils.py)\"\r\nAm I do something wrong?"
] | 1,593 | 1,679 | 1,594 | CONTRIBUTOR | null | So I installed the latest version of transformers on Google Colab:
```
!pip install transformers
```
When trying to invoke the conversion file using
```
!python /usr/local/lib/python3.6/dist-packages/transformers/convert_pytorch_checkpoint_to_tf2.py .py --help
```
or trying to use
```python
from transformers.file_utils import hf_bucket_url  # works
from transformers.convert_pytorch_checkpoint_to_tf2 import *  # fails

convert_pytorch_checkpoint_to_tf("gpt2", pytorch_file, config_file, tf_file)
```
I get this error:
```
ImportError                               Traceback (most recent call last)
<ipython-input-3-dadaf83ecea0> in <module>()
      1 from transformers.file_utils import hf_bucket_url
----> 2 from transformers.convert_pytorch_checkpoint_to_tf2 import *
      3
      4 convert_pytorch_checkpoint_to_tf("gpt2", pytorch_file, config_file, tf_file)

/usr/local/lib/python3.6/dist-packages/transformers/convert_pytorch_checkpoint_to_tf2.py in <module>()
     20 import os
     21
---> 22 from transformers import (
     23     ALBERT_PRETRAINED_CONFIG_ARCHIVE_MAP,
     24     BERT_PRETRAINED_CONFIG_ARCHIVE_MAP,

ImportError: cannot import name 'hf_bucket_url'
```
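The fix, sketched minimally (the exact diff is in this PR): import the helper from `transformers.file_utils`, where it is defined, instead of from the top-level package:
```python
# before (fails): hf_bucket_url is not re-exported at the top level
# from transformers import hf_bucket_url

# after (works): import it from the module where it is defined
from transformers.file_utils import hf_bucket_url
```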
Turns out, there's a problem in the import of `hf_bucket_url`. Once fixed, the code ran properly. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5531/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5531/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5531",
"html_url": "https://github.com/huggingface/transformers/pull/5531",
"diff_url": "https://github.com/huggingface/transformers/pull/5531.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5531.patch",
"merged_at": 1594043711000
} |
https://api.github.com/repos/huggingface/transformers/issues/5530 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5530/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5530/comments | https://api.github.com/repos/huggingface/transformers/issues/5530/events | https://github.com/huggingface/transformers/issues/5530 | 651,140,258 | MDU6SXNzdWU2NTExNDAyNTg= | 5,530 | Tabert | {
"login": "flozi00",
"id": 47894090,
"node_id": "MDQ6VXNlcjQ3ODk0MDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/flozi00",
"html_url": "https://github.com/flozi00",
"followers_url": "https://api.github.com/users/flozi00/followers",
"following_url": "https://api.github.com/users/flozi00/following{/other_user}",
"gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flozi00/subscriptions",
"organizations_url": "https://api.github.com/users/flozi00/orgs",
"repos_url": "https://api.github.com/users/flozi00/repos",
"events_url": "https://api.github.com/users/flozi00/events{/privacy}",
"received_events_url": "https://api.github.com/users/flozi00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
},
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [
"In their readme they say that the implementation to this project is still WIP, but they said this on release.\r\nIt should be really easy to add it here while they are already using this library in their implementation.\r\nDoes the huggingface team has more information about this, if the community can open an PR for it or waiting for the original authors ?",
"Looking forward to this new model.",
"Hello all!\r\n\r\nI looked at the list of models in the transformers site and TaBERT is still not listed. Does anyone know when it is going to be ready?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Not a stale, still cant see TaBert on HuggingFace List\r\n\r\n"
] | 1,593 | 1,613 | null | CONTRIBUTOR | null | # 🌟 New model addition
## Model description
a pre-trained language model for learning joint representations of natural language utterances and (semi-)structured tables for semantic parsing. TaBERT is pre-trained on a massive corpus of 26M Web tables and their associated natural language context, and can be used as a drop-in replacement for a semantic parser's original encoder to compute representations for utterances and table schemas (columns).
<!-- Important information -->
## Open source status
* [X] the model implementation is available: (give details)
https://github.com/facebookresearch/TaBERT
* [X] the model weights are available: (give details)
https://github.com/facebookresearch/TaBERT
* [ ] who are the authors: (mention them, if possible by @gh-username)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5530/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5530/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5529 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5529/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5529/comments | https://api.github.com/repos/huggingface/transformers/issues/5529/events | https://github.com/huggingface/transformers/pull/5529 | 651,127,811 | MDExOlB1bGxSZXF1ZXN0NDQ0NDQ4MTUz | 5,529 | [fix] pin sacrebleu to fix CI ImportError | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5529?src=pr&el=h1) Report\n> Merging [#5529](https://codecov.io/gh/huggingface/transformers/pull/5529?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/58cca47c16149e43d1b516623d59e3c5d97f695e&el=desc) will **decrease** coverage by `0.34%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5529?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5529 +/- ##\n==========================================\n- Coverage 77.83% 77.49% -0.35% \n==========================================\n Files 141 141 \n Lines 24634 24634 \n==========================================\n- Hits 19175 19090 -85 \n- Misses 5459 5544 +85 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5529?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5529/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.92% <0.00%> (-68.47%)` | :arrow_down: |\n| [src/transformers/optimization\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5529/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: |\n| [src/transformers/tokenization\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5529/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `71.83% <0.00%> (-23.95%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5529/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.59% <0.00%> (+0.33%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5529/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+1.00%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5529/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.76% <0.00%> (+32.51%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5529?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5529?src=pr&el=footer). Last update [58cca47...e21636a](https://codecov.io/gh/huggingface/transformers/pull/5529?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,593 | 1,594 | 1,594 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5529/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5529/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5529",
"html_url": "https://github.com/huggingface/transformers/pull/5529",
"diff_url": "https://github.com/huggingface/transformers/pull/5529.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5529.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5528 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5528/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5528/comments | https://api.github.com/repos/huggingface/transformers/issues/5528/events | https://github.com/huggingface/transformers/issues/5528 | 651,123,026 | MDU6SXNzdWU2NTExMjMwMjY= | 5,528 | 3.0.1: "unexpected keyword argument 'is_pretokenized'" when using batch_encode_plus() w/ Fast Tokenizers | {
"login": "minimaxir",
"id": 2179708,
"node_id": "MDQ6VXNlcjIxNzk3MDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2179708?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/minimaxir",
"html_url": "https://github.com/minimaxir",
"followers_url": "https://api.github.com/users/minimaxir/followers",
"following_url": "https://api.github.com/users/minimaxir/following{/other_user}",
"gists_url": "https://api.github.com/users/minimaxir/gists{/gist_id}",
"starred_url": "https://api.github.com/users/minimaxir/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/minimaxir/subscriptions",
"organizations_url": "https://api.github.com/users/minimaxir/orgs",
"repos_url": "https://api.github.com/users/minimaxir/repos",
"events_url": "https://api.github.com/users/minimaxir/events{/privacy}",
"received_events_url": "https://api.github.com/users/minimaxir/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | null | [] | [
"I must admit that I am confused here. I don't see how this is possible with one of the `0.8.0` version of `tokenizers`, they should have `is_pretokenized` as expected argument of both `encode` and `encode_batch`. Unfortunately, I was not able to reproduce it using the snippet you provided. \r\n\r\nWould it be possible that you share a Google Colab showing this bug?",
"Hmm, it doesn't appear to be happening on Colab: https://colab.research.google.com/drive/1TCGbkP63BAwKHFob9YvP_PE7-8cyAZ3b?usp=sharing\r\n\r\nI have seen other weird issues with tokenizers so it might be an issue on my local config; however the versions are definitely correct, which is baffling.",
"If you can reproduce this on your local config, can you check `tokenizers.__version__`? I suspect that for some reason you had a previous version that didn't include this argument. Sometimes I have weird bugs with `pip` that I don't really understand either, and so it might be reporting a version different from the one actually loaded.",
"Ugh, yes, `tokenizers.__version__` returned 0.7.0. (seems like it didn't get overwritten when upgrading to `transformers` 3.0.1)\r\n\r\nAfter uninstalling tokenizers (twice for some reason) via `pip`, then reinstalling 0.8.0rc4, it works now.\r\n\r\nThanks for the help! Closing."
] | 1,593 | 1,594 | 1,594 | NONE | null | # 🐛 Bug
## Information
See title. Does not occur with "slow" tokenizers.
## To reproduce
```python
from transformers import GPT2TokenizerFast
tokenizer = GPT2TokenizerFast.from_pretrained('gpt2')
texts = ['I am a teapot', 'Short and stout']
tokenizer.batch_encode_plus(texts)
```
## Trace
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-3-a99759a2bc25> in <module>
1 texts = ['I am a teapot', 'Short and stout']
2
----> 3 tokenizer.batch_encode_plus(texts)
/usr/local/lib/python3.8/site-packages/transformers-3.0.1-py3.8.egg/transformers/tokenization_utils_base.py in batch_encode_plus(self, batch_text_or_text_pairs, add_special_tokens, padding, truncation, max_length, stride, is_pretokenized, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)
1813 )
1814
-> 1815 return self._batch_encode_plus(
1816 batch_text_or_text_pairs=batch_text_or_text_pairs,
1817 add_special_tokens=add_special_tokens,
/usr/local/lib/python3.8/site-packages/transformers-3.0.1-py3.8.egg/transformers/tokenization_gpt2.py in _batch_encode_plus(self, *args, **kwargs)
365 )
366
--> 367 return super()._batch_encode_plus(*args, **kwargs)
368
369 def _encode_plus(self, *args, **kwargs) -> BatchEncoding:
/usr/local/lib/python3.8/site-packages/transformers-3.0.1-py3.8.egg/transformers/tokenization_utils_fast.py in _batch_encode_plus(self, batch_text_or_text_pairs, add_special_tokens, padding_strategy, truncation_strategy, max_length, stride, is_pretokenized, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)
340 encodings = [encodings]
341 else:
--> 342 encodings = self._tokenizer.encode_batch(
343 batch_text_or_text_pairs, add_special_tokens=add_special_tokens, is_pretokenized=is_pretokenized
344 )
TypeError: encode_batch() got an unexpected keyword argument 'is_pretokenized'
```
## Environment info
- `transformers` version: 3.0.1
- `tokenizers` version: 0.8.0rc4
- Platform: macOS
- Python version: 3.8.2
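As the resolution in the comments shows, pip's reported version can disagree with the package actually being imported; a minimal check of the live version (transformers 3.0.1 expects tokenizers 0.8.0rc4):
```python
import tokenizers

# If this prints 0.7.0 despite pip reporting 0.8.0rc4, a stale install is
# shadowing the new one and needs to be uninstalled and reinstalled.
print(tokenizers.__version__)
```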
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5528/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5528/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5527 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5527/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5527/comments | https://api.github.com/repos/huggingface/transformers/issues/5527/events | https://github.com/huggingface/transformers/issues/5527 | 651,105,412 | MDU6SXNzdWU2NTExMDU0MTI= | 5,527 | Tf to pytorch | {
"login": "Sagar1094",
"id": 54572031,
"node_id": "MDQ6VXNlcjU0NTcyMDMx",
"avatar_url": "https://avatars.githubusercontent.com/u/54572031?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sagar1094",
"html_url": "https://github.com/Sagar1094",
"followers_url": "https://api.github.com/users/Sagar1094/followers",
"following_url": "https://api.github.com/users/Sagar1094/following{/other_user}",
"gists_url": "https://api.github.com/users/Sagar1094/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sagar1094/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sagar1094/subscriptions",
"organizations_url": "https://api.github.com/users/Sagar1094/orgs",
"repos_url": "https://api.github.com/users/Sagar1094/repos",
"events_url": "https://api.github.com/users/Sagar1094/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sagar1094/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"#Code used to convert from tf to Pytorch\r\n\r\nimport torch\r\n\r\nfrom transformers import (\r\n CONFIG_NAME,\r\n WEIGHTS_NAME,\r\n XLNetConfig,\r\n XLNetForQuestionAnswering,\r\n XLNetForSequenceClassification,\r\n XLNetLMHeadModel,\r\n load_tf_weights_in_xlnet,\r\n XLNetModel\r\n)\r\n\r\n\r\ntf_checkpoint_path=\"./model/\"\r\nxlnet_config_file = \"./model/config.json\"\r\npytorch_dump_path=\"./xlnetpytorch/\"\r\n\r\nconfig = XLNetConfig.from_json_file(xlnet_config_file)\r\nprint(\"Building PyTorch model from configuration: {}\".format(str(config)))\r\nmodel = XLNetForSequenceClassification(config)\r\n\r\n#Load weights from tf checkpoint\r\nmodel = load_tf_weights_in_xlnet(model, config, tf_checkpoint_path)\r\n\r\n#Save pytorch-model\r\nprint(\"Save PyTorch model to {}\".format(pytorch_dump_path))\r\nmodel.save_pretrained(pytorch_dump_path)",
"Use config.num_labels = 536 (number of labels) in my case to define the number of labels in custom data set before initialising model i.e., before line model = XLNetForSequenceClassification(config)"
] | 1,593 | 1,594 | 1,594 | NONE | null | Hi, I have an XLNet checkpoint file and want to convert it to PyTorch. I have converted the file to PyTorch, but when I use the same file with simpletransformers for text classification, it says the number of labels does not match. Where should I specify the number of classes while converting from TF to PyTorch? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5527/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5527/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5526 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5526/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5526/comments | https://api.github.com/repos/huggingface/transformers/issues/5526/events | https://github.com/huggingface/transformers/pull/5526 | 651,096,185 | MDExOlB1bGxSZXF1ZXN0NDQ0NDI2MjM2 | 5,526 | Fix `RobertaClassificationHead` style consistency. | {
"login": "ranamihir",
"id": 8270471,
"node_id": "MDQ6VXNlcjgyNzA0NzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/8270471?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ranamihir",
"html_url": "https://github.com/ranamihir",
"followers_url": "https://api.github.com/users/ranamihir/followers",
"following_url": "https://api.github.com/users/ranamihir/following{/other_user}",
"gists_url": "https://api.github.com/users/ranamihir/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ranamihir/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ranamihir/subscriptions",
"organizations_url": "https://api.github.com/users/ranamihir/orgs",
"repos_url": "https://api.github.com/users/ranamihir/repos",
"events_url": "https://api.github.com/users/ranamihir/events{/privacy}",
"received_events_url": "https://api.github.com/users/ranamihir/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5526?src=pr&el=h1) Report\n> Merging [#5526](https://codecov.io/gh/huggingface/transformers/pull/5526?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/58cca47c16149e43d1b516623d59e3c5d97f695e&el=desc) will **increase** coverage by `0.56%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5526?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5526 +/- ##\n==========================================\n+ Coverage 77.83% 78.39% +0.56% \n==========================================\n Files 141 141 \n Lines 24634 24634 \n==========================================\n+ Hits 19175 19313 +138 \n+ Misses 5459 5321 -138 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5526?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5526/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.00% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5526/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.04% <100.00%> (ø)` | |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5526/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.11% <0.00%> (+0.28%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5526/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+1.25%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5526/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.76% <0.00%> (+32.51%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5526?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5526?src=pr&el=footer). Last update [58cca47...21fae42](https://codecov.io/gh/huggingface/transformers/pull/5526?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"I understand why it's inconsistent, but it would introduce a breaking change to the `RobertaClassificationHead` and `TFRobertaClassificationHead` objects!",
"@LysandreJik Thanks, yes, that's correct. Do you think this is something that may be incorporated in a future major release, or should I close this MR?",
"As an alternative, I've been using a simple [SequencePooler](https://github.com/ranamihir/pytorch_common/blob/ec2d9baa57d3cb5ac80232bea75ea0a49a6ba100/pytorch_common/utils.py#L912-L993) class to provide a generic way of extracting the pooled representation from different models. I'd be happy to create a PR for this if you think it could be useful.\r\n\r\n```\r\n>>> from pytorch_common.utils import SequencePooler\r\n>>> pooler = SequencePooler(\"bert\")\r\n>>> pooler\r\nSequencePooler(model_type=bert)\r\n>>> pooler = SequencePooler(\"roberta\")\r\n>>> pooler\r\nSequencePooler(model_type=roberta)\r\n>>> pooler = SequencePooler(\"dummy\")\r\nWARNING:root:No supported sequence pooler was found for model of type 'dummy'. Using the default one.\r\n>>> pooler\r\nSequencePooler(model_type=default)\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,593 | 1,601 | 1,601 | NONE | null | There's a slight inconsistency in `RobertaClassificationHead` in that it takes in the whole sequence output from the `RobertaModel`, and extracts the pooled output inside its own forward method, seen [here](https://github.com/huggingface/transformers/blob/58cca47c16149e43d1b516623d59e3c5d97f695e/src/transformers/modeling_roberta.py#L561).
This is different from other models, where the pooled output is computed beforehand and directly expected as input in the classifier. E.g. in [`BertForSequenceClassification`](https://github.com/huggingface/transformers/blob/58cca47c16149e43d1b516623d59e3c5d97f695e/src/transformers/modeling_bert.py#L1270), [`DistilBertForSequenceClassification`](https://github.com/huggingface/transformers/blob/58cca47c16149e43d1b516623d59e3c5d97f695e/src/transformers/modeling_distilbert.py#L631), [`BartForSequenceClassification`](https://github.com/huggingface/transformers/blob/58cca47c16149e43d1b516623d59e3c5d97f695e/src/transformers/modeling_bart.py#L1138), etc.
This PR mainly addresses this inconsistency in `modeling_roberta.py` and `modeling_tf_roberta.py`. Additionally, some minor aesthetic changes are made to these files in order to pass the black / isort code quality checks.
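To make the contrast concrete, a minimal sketch of the two head styles; the class names, `hidden_size`, and `num_labels` are placeholders, not library API:
```python
import torch
import torch.nn as nn

# BERT-style: the head receives the already-pooled [CLS] representation.
class PooledHead(nn.Module):
    def __init__(self, hidden_size, num_labels):
        super().__init__()
        self.out_proj = nn.Linear(hidden_size, num_labels)

    def forward(self, pooled_output):  # (batch, hidden)
        return self.out_proj(pooled_output)

# Current RoBERTa-style: the head receives the full sequence and pools inside.
class SequenceHead(nn.Module):
    def __init__(self, hidden_size, num_labels):
        super().__init__()
        self.dense = nn.Linear(hidden_size, hidden_size)
        self.out_proj = nn.Linear(hidden_size, num_labels)

    def forward(self, features):  # (batch, seq_len, hidden)
        x = features[:, 0, :]  # take <s> token (equivalent to [CLS])
        return self.out_proj(torch.tanh(self.dense(x)))
```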
Note: This PR is a duplicate of #4107 with minor changes made to pass code quality checks. Closed that one since it was outdated. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5526/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5526/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5526",
"html_url": "https://github.com/huggingface/transformers/pull/5526",
"diff_url": "https://github.com/huggingface/transformers/pull/5526.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5526.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/5525 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5525/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5525/comments | https://api.github.com/repos/huggingface/transformers/issues/5525/events | https://github.com/huggingface/transformers/issues/5525 | 651,054,384 | MDU6SXNzdWU2NTEwNTQzODQ= | 5,525 | WARNING:transformers.tokenization_utils:Keyword arguments {'add_space_before_punct_symbol': True} not recognized. | {
"login": "songproducer",
"id": 597346,
"node_id": "MDQ6VXNlcjU5NzM0Ng==",
"avatar_url": "https://avatars.githubusercontent.com/u/597346?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/songproducer",
"html_url": "https://github.com/songproducer",
"followers_url": "https://api.github.com/users/songproducer/followers",
"following_url": "https://api.github.com/users/songproducer/following{/other_user}",
"gists_url": "https://api.github.com/users/songproducer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/songproducer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/songproducer/subscriptions",
"organizations_url": "https://api.github.com/users/songproducer/orgs",
"repos_url": "https://api.github.com/users/songproducer/repos",
"events_url": "https://api.github.com/users/songproducer/events{/privacy}",
"received_events_url": "https://api.github.com/users/songproducer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,593 | 1,594 | 1,594 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...):
ctrl
Language I am using the model on (English, Chinese ...):
English
The problem arises when using:
* [ ] the official example scripts: (give details below)
```
python run_generation.py \
--model_type ctrl \
--model_name ctrl --length=100 --temperature 0.2 --num_return_sequences=5 --p=0.8 --seed=17 --repetition_penalty=1.2
```
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
```
- `transformers` version: 3.0.1
- Platform: Darwin-19.5.0-x86_64-i386-64bit
- Python version: 3.7.4
- PyTorch version (GPU?): 1.2.0 (False)
- Tensorflow version (GPU?): 2.1.0 (False)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5525/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5525/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5524 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5524/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5524/comments | https://api.github.com/repos/huggingface/transformers/issues/5524/events | https://github.com/huggingface/transformers/issues/5524 | 651,041,941 | MDU6SXNzdWU2NTEwNDE5NDE= | 5,524 | How to fine-tune tinyBERT for question-asnwering | {
"login": "anirbansaha96",
"id": 52232270,
"node_id": "MDQ6VXNlcjUyMjMyMjcw",
"avatar_url": "https://avatars.githubusercontent.com/u/52232270?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anirbansaha96",
"html_url": "https://github.com/anirbansaha96",
"followers_url": "https://api.github.com/users/anirbansaha96/followers",
"following_url": "https://api.github.com/users/anirbansaha96/following{/other_user}",
"gists_url": "https://api.github.com/users/anirbansaha96/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anirbansaha96/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anirbansaha96/subscriptions",
"organizations_url": "https://api.github.com/users/anirbansaha96/orgs",
"repos_url": "https://api.github.com/users/anirbansaha96/repos",
"events_url": "https://api.github.com/users/anirbansaha96/events{/privacy}",
"received_events_url": "https://api.github.com/users/anirbansaha96/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,593 | 1,594 | 1,594 | NONE | null | # ❓ Questions & Help
How do I fine-tune tiny-BERT for question answering on my custom dataset? I have a list of questions and corresponding answers in CSV format.
[You can find it at StackOverFlow here](https://stackoverflow.com/questions/62710931/huggingface-transformers-model-for-legal-question-answering)
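A sketch of the first step, reading the CSV and batch-encoding question/answer pairs; the file name, column layout, and the checkpoint are assumptions, and extractive QA fine-tuning would additionally need a context passage per pair:
```python
import csv
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mrm8488/bert-tiny-5-finetuned-squadv2")

with open("qa.csv", newline="") as f:
    rows = list(csv.reader(f))  # each row assumed to be [question, answer]

questions = [q for q, _ in rows]
answers = [a for _, a in rows]
encodings = tokenizer(questions, answers, padding=True, truncation=True,
                      return_tensors="pt")
print(encodings["input_ids"].shape)
```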
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5524/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5524/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5523 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5523/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5523/comments | https://api.github.com/repos/huggingface/transformers/issues/5523/events | https://github.com/huggingface/transformers/pull/5523 | 651,041,795 | MDExOlB1bGxSZXF1ZXN0NDQ0Mzg2ODg1 | 5,523 | Create model card | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5523?src=pr&el=h1) Report\n> Merging [#5523](https://codecov.io/gh/huggingface/transformers/pull/5523?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/58cca47c16149e43d1b516623d59e3c5d97f695e&el=desc) will **increase** coverage by `0.02%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5523?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5523 +/- ##\n==========================================\n+ Coverage 77.83% 77.85% +0.02% \n==========================================\n Files 141 141 \n Lines 24634 24634 \n==========================================\n+ Hits 19175 19180 +5 \n+ Misses 5459 5454 -5 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5523?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5523/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+1.25%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5523?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5523?src=pr&el=footer). Last update [58cca47...0780d53](https://codecov.io/gh/huggingface/transformers/pull/5523?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,593 | 1,594 | 1,594 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5523/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5523/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5523",
"html_url": "https://github.com/huggingface/transformers/pull/5523",
"diff_url": "https://github.com/huggingface/transformers/pull/5523.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5523.patch",
"merged_at": 1594118023000
} |
https://api.github.com/repos/huggingface/transformers/issues/5522 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5522/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5522/comments | https://api.github.com/repos/huggingface/transformers/issues/5522/events | https://github.com/huggingface/transformers/pull/5522 | 651,039,176 | MDExOlB1bGxSZXF1ZXN0NDQ0Mzg0OTcy | 5,522 | Added data collator for permutation (XLNet) language modeling and related calls | {
"login": "shngt",
"id": 20009551,
"node_id": "MDQ6VXNlcjIwMDA5NTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/20009551?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shngt",
"html_url": "https://github.com/shngt",
"followers_url": "https://api.github.com/users/shngt/followers",
"following_url": "https://api.github.com/users/shngt/following{/other_user}",
"gists_url": "https://api.github.com/users/shngt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shngt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shngt/subscriptions",
"organizations_url": "https://api.github.com/users/shngt/orgs",
"repos_url": "https://api.github.com/users/shngt/repos",
"events_url": "https://api.github.com/users/shngt/events{/privacy}",
"received_events_url": "https://api.github.com/users/shngt/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5522?src=pr&el=h1) Report\n> Merging [#5522](https://codecov.io/gh/huggingface/transformers/pull/5522?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/58cca47c16149e43d1b516623d59e3c5d97f695e&el=desc) will **decrease** coverage by `2.04%`.\n> The diff coverage is `16.32%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5522?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5522 +/- ##\n==========================================\n- Coverage 77.83% 75.79% -2.05% \n==========================================\n Files 141 141 \n Lines 24634 24682 +48 \n==========================================\n- Hits 19175 18708 -467 \n- Misses 5459 5974 +515 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5522?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/5522/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `19.81% <14.58%> (-78.60%)` | :arrow_down: |\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/5522/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.22% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5522/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.92% <0.00%> (-68.47%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/5522/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: |\n| [...rc/transformers/data/datasets/language\\_modeling.py](https://codecov.io/gh/huggingface/transformers/pull/5522/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2xhbmd1YWdlX21vZGVsaW5nLnB5) | `34.69% <0.00%> (-57.15%)` | :arrow_down: |\n| [src/transformers/data/datasets/glue.py](https://codecov.io/gh/huggingface/transformers/pull/5522/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2dsdWUucHk=) | `50.74% <0.00%> (-35.83%)` | :arrow_down: |\n| [src/transformers/trainer\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5522/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `60.00% <0.00%> (-25.72%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5522/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `16.66% <0.00%> (-21.30%)` | :arrow_down: |\n| [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/5522/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `32.00% <0.00%> (-17.10%)` | :arrow_down: |\n| ... and [7 more](https://codecov.io/gh/huggingface/transformers/pull/5522/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5522?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5522?src=pr&el=footer). Last update [58cca47...a99871b](https://codecov.io/gh/huggingface/transformers/pull/5522?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"@patrickvonplaten Please review",
"Hey @shngt - Thanks a mille for your PR. Pretraining for XLNet is not easy at all, so this is a super valuable PR :-) \r\n\r\nJust re-read the paper and looked into the official xlnet code -> I think I finally understand now how it works exactly :D \r\nBecause it's not at all straight-forward to understand what happens there, it would be great if we can try to make the code as understandable as possible -> that's why I left so many comments mainly on variable names.\r\n\r\nAlso, we should add a test to `test_trainer.py` to next to the other data collators to test this one :-) \r\n\r\nLooking forward to merge this soon :-) ",
"@patrickvonplaten I've added more detailed comments and fixed the naming issues. I also added a few tests based on what I could see in `tests/test_trainer.py`. Let me know what you think :)",
"Awesome job @shngt ! This looks very clean now :-) Good to merge IMO.\r\nPinging @julien-c @LysandreJik @sgugger. ",
"Pinging @thomwolf and @julien-c for notification and merging - Great job @shngt!",
"Impressive work, I was following this from the beginning.\r\n\r\nCan we continue training XLNet now on domain-specific corpus ? and when will this merge be available to use? \r\n\r\nThanks",
"PR should be ready for use as of now, if you install from master :-) ",
"Great work @shngt, this is very nice.",
"Hi, I raised an issue here #22435 because I have a question about this `DataCollatorForPermutationLanguageModeling`. Looking forward to your guys' response. Thank you!"
] | 1,593 | 1,680 | 1,594 | CONTRIBUTOR | null | Added `DataCollatorForPermutationLanguageModeling` in `data/data_collator.py` to return necessary inputs (applies masking and generates revelant tensors input_ids, perm_mask, target_mask and labels as per https://github.com/zihangdai/xlnet/blob/master/data_utils.py) for language modeling training with XLNetLMHeadModel. Also added related arguments, logic and calls in `examples/language-modeling/run_language_modeling.py`. Defined a separate `--plm_probability` flag for its use.
Also looked into CTRL: it uses a CLM loss just like GPT and GPT-2, so it should work out of the box with this script (provided `past` is handled similarly to `mems` for XLNet). Added a few words in the comments to reflect this.
Changed calls and imports appropriately.
Resolves: #4739, #2008 (partially) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5522/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5522/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5522",
"html_url": "https://github.com/huggingface/transformers/pull/5522",
"diff_url": "https://github.com/huggingface/transformers/pull/5522.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5522.patch",
"merged_at": 1594109858000
} |
https://api.github.com/repos/huggingface/transformers/issues/5521 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5521/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5521/comments | https://api.github.com/repos/huggingface/transformers/issues/5521/events | https://github.com/huggingface/transformers/issues/5521 | 650,988,699 | MDU6SXNzdWU2NTA5ODg2OTk= | 5,521 | What should I do if I want a model class similar to BertForSequenceClassification? | {
"login": "cloudygoose",
"id": 1544039,
"node_id": "MDQ6VXNlcjE1NDQwMzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1544039?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cloudygoose",
"html_url": "https://github.com/cloudygoose",
"followers_url": "https://api.github.com/users/cloudygoose/followers",
"following_url": "https://api.github.com/users/cloudygoose/following{/other_user}",
"gists_url": "https://api.github.com/users/cloudygoose/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cloudygoose/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cloudygoose/subscriptions",
"organizations_url": "https://api.github.com/users/cloudygoose/orgs",
"repos_url": "https://api.github.com/users/cloudygoose/repos",
"events_url": "https://api.github.com/users/cloudygoose/events{/privacy}",
"received_events_url": "https://api.github.com/users/cloudygoose/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi @cloudygoose , \r\nTake a look at the code of `BertForSequenceClassification` https://huggingface.co/transformers/_modules/transformers/modeling_bert.html#BertForSequenceClassification,\r\n\r\nAll it does is, pass the inputs through `BertModel`, which returns the pooled output of bert and the applies the classifier on top it.\r\n\r\nYou can follow the same process, create `YourSequenceClassification` class by sub-classing `BertPreTrainedModel`,\r\ninit your `BertModel` and the additional linear heads and final classfication layer and do forward as \r\n`BertModel -> additional linear heads -> classifier`\r\n\r\nYou can just take the `BertForSequenceClassification` as is and additional layers before the `classifier` \r\n\r\nHope this helps",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,593 | 1,599 | 1,599 | NONE | null | # ❓ Questions & Help
Hi!
For my task I want to implement a class similar to BertForSequenceClassification, but a little different: for example, I may want multiple linear heads on top of the BERT encoding. Call it BertForMySequenceClassification.
But I still want to use the pretrained BERT parameters, and hopefully keep easy load/save.
For example, I hope I can call BertForMySequenceClassification.from_pretrained(some_path).
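To make it concrete, here is a rough sketch of the kind of class I mean (the class name, the extra head, and the label count are just placeholders):

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertPreTrainedModel

class BertForMySequenceClassification(BertPreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        self.bert = BertModel(config)  # pretrained weights load into this submodule
        self.dropout = nn.Dropout(config.hidden_dropout_prob)
        # Extra linear head(s) before the classifier; sizes are placeholders.
        self.extra_head = nn.Linear(config.hidden_size, config.hidden_size)
        self.classifier = nn.Linear(config.hidden_size, config.num_labels)
        self.init_weights()

    def forward(self, input_ids=None, attention_mask=None, token_type_ids=None, labels=None):
        outputs = self.bert(input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids)
        pooled = self.dropout(outputs[1])  # pooled [CLS] representation
        pooled = torch.tanh(self.extra_head(pooled))
        logits = self.classifier(pooled)
        if labels is not None:
            loss = nn.CrossEntropyLoss()(logits.view(-1, self.config.num_labels), labels.view(-1))
            return loss, logits
        return (logits,)

# Loads the pretrained BERT weights; the new head layers start randomly initialized.
model = BertForMySequenceClassification.from_pretrained("bert-base-uncased", num_labels=3)
model.save_pretrained("./my-model")  # save/load then works like any other transformers model
```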
Any suggestions or help are much appreciated!
Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5521/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5521/timeline | completed | null | null |