url (stringlengths 62–66) | repository_url (stringclasses 1 value) | labels_url (stringlengths 76–80) | comments_url (stringlengths 71–75) | events_url (stringlengths 69–73) | html_url (stringlengths 50–56) | id (int64 377M–2.15B) | node_id (stringlengths 18–32) | number (int64 1–29.2k) | title (stringlengths 1–487) | user (dict) | labels (list) | state (stringclasses 2 values) | locked (bool 2 classes) | assignee (dict) | assignees (list) | comments (sequence) | created_at (int64 1.54k–1.71k) | updated_at (int64 1.54k–1.71k) | closed_at (int64 1.54k–1.71k ⌀) | author_association (stringclasses 4 values) | active_lock_reason (stringclasses 2 values) | body (stringlengths 0–234k ⌀) | reactions (dict) | timeline_url (stringlengths 71–75) | state_reason (stringclasses 3 values) | draft (bool 2 classes) | pull_request (dict) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/5120 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5120/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5120/comments | https://api.github.com/repos/huggingface/transformers/issues/5120/events | https://github.com/huggingface/transformers/issues/5120 | 641,487,035 | MDU6SXNzdWU2NDE0ODcwMzU= | 5,120 | AutoTokenizer.from_pretrained('facebook/mbart-large-en-ro') fails | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,592 | 1,592 | 1,592 | CONTRIBUTOR | null | It is not in TOKENIZER_MAPPING because it has the same config as Bart.
We need to make a separate MBartConfig.
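A plausible interim workaround — my assumption, not something stated in the issue, and assuming `MBartTokenizer` is exported in this version — is to bypass `AutoTokenizer`'s config-based dispatch and load the mBART tokenizer class directly:

```python
# Hedged sketch: go straight to MBartTokenizer instead of AutoTokenizer,
# whose dispatch resolves this checkpoint's Bart-style config to the wrong tokenizer.
from transformers import MBartTokenizer

tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-en-ro")
```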
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5120/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5120/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5119 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5119/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5119/comments | https://api.github.com/repos/huggingface/transformers/issues/5119/events | https://github.com/huggingface/transformers/pull/5119 | 641,478,472 | MDExOlB1bGxSZXF1ZXN0NDM2NzA0MTYx | 5,119 | [cleanup] remove redundant code in SummarizationDataset | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5119?src=pr&el=h1) Report\n> Merging [#5119](https://codecov.io/gh/huggingface/transformers/pull/5119?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/355954ffca798bb81d9db8886e30ce10f11e8a40&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5119?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5119 +/- ##\n==========================================\n- Coverage 77.28% 77.28% -0.01% \n==========================================\n Files 133 133 \n Lines 22134 22134 \n==========================================\n- Hits 17107 17106 -1 \n- Misses 5027 5028 +1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5119?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5119/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `75.00% <0.00%> (-0.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5119/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.26% <0.00%> (-0.16%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5119/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <0.00%> (+0.23%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5119?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5119?src=pr&el=footer). Last update [355954f...09ff62a](https://codecov.io/gh/huggingface/transformers/pull/5119?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,592 | 1,592 | 1,592 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5119/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5119/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5119",
"html_url": "https://github.com/huggingface/transformers/pull/5119",
"diff_url": "https://github.com/huggingface/transformers/pull/5119.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5119.patch",
"merged_at": 1592526889000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5118 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5118/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5118/comments | https://api.github.com/repos/huggingface/transformers/issues/5118/events | https://github.com/huggingface/transformers/issues/5118 | 641,444,893 | MDU6SXNzdWU2NDE0NDQ4OTM= | 5,118 | UnboundLocalError: local variable 'next_tokens' referenced before assignment when using Generate() | {
"login": "chrisdoyleIE",
"id": 44365591,
"node_id": "MDQ6VXNlcjQ0MzY1NTkx",
"avatar_url": "https://avatars.githubusercontent.com/u/44365591?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chrisdoyleIE",
"html_url": "https://github.com/chrisdoyleIE",
"followers_url": "https://api.github.com/users/chrisdoyleIE/followers",
"following_url": "https://api.github.com/users/chrisdoyleIE/following{/other_user}",
"gists_url": "https://api.github.com/users/chrisdoyleIE/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chrisdoyleIE/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chrisdoyleIE/subscriptions",
"organizations_url": "https://api.github.com/users/chrisdoyleIE/orgs",
"repos_url": "https://api.github.com/users/chrisdoyleIE/repos",
"events_url": "https://api.github.com/users/chrisdoyleIE/events{/privacy}",
"received_events_url": "https://api.github.com/users/chrisdoyleIE/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"@patrickvonplaten, do you want to take a look?",
"Hey @chrisdoyleIE ,\r\n\r\nUsing your code example above (I corrected some typos and missing imports), I am not able to reproduce the error. If a longer input is need to produce this error, please provide all necessary code to reproduce this error. Ideally, I should be able to copy paste the code into a console and get the same error as you :-)",
"Hey @patrickvonplaten , \r\n\r\nI'll do some digging and see if I can't reproduce it myself such that it's easily paste-able and then I can share this code (I currently have a few custom packages calling eachother which is hairier than i'd like and not trivial to insert into an issue).",
"I found a similar error when doing summarization, and just wanted to follow up on this. I have been stuck on this for a little bit now and I just wanted to check if there was a simple user-end solution to this, maybe incorrect arguments, etc.\r\nThis is a simplified notebook with the error: https://colab.research.google.com/drive/1Fj74x2NDJbhsty-oXzfhOCO185T80zw3?usp=sharing",
"Thanks for the notebook @MathewPerez! Will checkit now ",
"Awesome, I can reproduce the error - will look at a fix now :-) ",
"The problem is that `max_length` is not bigger than `cur_len` so that model will not produce any text. \r\nThis will fix the problem:\r\n\r\n```python\r\noutputs = model.generate(input_ids=input_ids, num_beams=3, max_length=75)\r\n```"
] | 1,592 | 1,618 | 1,593 | NONE | null | # 🐛 Bug
I have pre-trained GPT2 on a summarisation dataset such that summarisation is a language modelling task, i.e. input = concat(padded_to_max_len(body, "TL;DR:", summary)).
For some reason, this error occurs when I try to generate via beam search using GPT2 with a language modelling head. Here is my code:
```python
from transformers import AutoTokenizer, GPT2LMHeadModel
import torch
# define tokenizer
tokenizer_kwargs = {"bos_token": "<|startoftext|>", "eos_token": "<|endoftext|>", "pad_token": "<|pad|>"}
tokenizer = AutoTokenizer.from_pretrained("gpt2", **tokenizer_kwargs)
# define model
model = GPT2LMHeadModel.from_pretrained("gpt2", pad_token_id=tokenizer.eos_token_id)
# define input
input_ids = torch.tensor(tokenizer.encode("some text that ends in TL;DR:")).unsqueeze(0)
# attempt to generate
y_pred_tensor = model.generate(input_ids=input_ids,
num_beams=5,
early_stopping=True,
no_repeat_ngram_size=2,
max_length=100
)
```
```bash
File "/Users/christopherdoyle/cp_projects/scribbl-ai/Scribbl/Scribbl/summarizer/models.py", line 147, in summarize
y_pred_tensor = self.model.generate(input_ids=input_tensor,
File "/Users/christopherdoyle/cp_projects/scribbl-ai/Scribbl/venv/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 15, in decorate_context
return func(*args, **kwargs)
File "/Users/christopherdoyle/cp_projects/scribbl-ai/Scribbl/venv/lib/python3.8/site-packages/transformers/modeling_utils.py", line 1100, in generate
output = self._generate_beam_search(
File "/Users/christopherdoyle/cp_projects/scribbl-ai/Scribbl/venv/lib/python3.8/site-packages/transformers/modeling_utils.py", line 1499, in _generate_beam_search
(token_id % vocab_size).item() is not eos_token_id for token_id in next_tokens[batch_idx]
UnboundLocalError: local variable 'next_tokens' referenced before assignment
```
Model I am using: ```GPT2LMHeadModel```
Language I am using the model on (English, Chinese ...): ```en```
Rather strangely, it works when ```max_length = 1024```, but not with smaller values.
## To reproduce
Running on CPU
python==3.8, transformers==2.11.0, torch==1.5.0
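The fix given in the comment thread is to make `max_length` exceed the prompt length. A minimal sketch (the `prompt_len` variable is mine, not from the thread):

```python
# Beam search only assigns `next_tokens` inside its decoding loop, and that
# loop is skipped entirely when max_length <= the prompt length.
prompt_len = input_ids.shape[-1]
y_pred_tensor = model.generate(
    input_ids=input_ids,
    num_beams=5,
    early_stopping=True,
    no_repeat_ngram_size=2,
    max_length=prompt_len + 100,  # headroom for 100 generated tokens
)
```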
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5118/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5118/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5117 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5117/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5117/comments | https://api.github.com/repos/huggingface/transformers/issues/5117/events | https://github.com/huggingface/transformers/issues/5117 | 641,377,781 | MDU6SXNzdWU2NDEzNzc3ODE= | 5,117 | An Implementation of ERNIE For Language Understanding (including Pre-training models and Fine-tuning tools) | {
"login": "manhlab",
"id": 47383746,
"node_id": "MDQ6VXNlcjQ3MzgzNzQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/47383746?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manhlab",
"html_url": "https://github.com/manhlab",
"followers_url": "https://api.github.com/users/manhlab/followers",
"following_url": "https://api.github.com/users/manhlab/following{/other_user}",
"gists_url": "https://api.github.com/users/manhlab/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manhlab/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manhlab/subscriptions",
"organizations_url": "https://api.github.com/users/manhlab/orgs",
"repos_url": "https://api.github.com/users/manhlab/repos",
"events_url": "https://api.github.com/users/manhlab/events{/privacy}",
"received_events_url": "https://api.github.com/users/manhlab/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Can we update to add this into transformers",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,592 | 1,597 | 1,597 | NONE | null | # 🌟 New model addition
ERNIE 2.0 is a continual pre-training framework for language understanding in which pre-training tasks can be incrementally built and learned through multi-task learning. ERNIE 2.0 builds a strong basis for nearly every NLP task: text classification, ranking, NER, machine reading comprehension, text generation and so on.
## Model description
<!-- Important information -->
https://github.com/PaddlePaddle/ERNIE
## Open source status
* [ ] the model implementation is available: (give details)
* [ ] the model weights are available: (give details)
* [ ] who are the authors: (mention them, if possible by @gh-username)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5117/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5117/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5116 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5116/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5116/comments | https://api.github.com/repos/huggingface/transformers/issues/5116/events | https://github.com/huggingface/transformers/pull/5116 | 641,367,749 | MDExOlB1bGxSZXF1ZXN0NDM2NjA4NTEw | 5,116 | support local_files_only option for tf models | {
"login": "ogarin",
"id": 24646300,
"node_id": "MDQ6VXNlcjI0NjQ2MzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/24646300?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ogarin",
"html_url": "https://github.com/ogarin",
"followers_url": "https://api.github.com/users/ogarin/followers",
"following_url": "https://api.github.com/users/ogarin/following{/other_user}",
"gists_url": "https://api.github.com/users/ogarin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ogarin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ogarin/subscriptions",
"organizations_url": "https://api.github.com/users/ogarin/orgs",
"repos_url": "https://api.github.com/users/ogarin/repos",
"events_url": "https://api.github.com/users/ogarin/events{/privacy}",
"received_events_url": "https://api.github.com/users/ogarin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5116?src=pr&el=h1) Report\n> Merging [#5116](https://codecov.io/gh/huggingface/transformers/pull/5116?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/355954ffca798bb81d9db8886e30ce10f11e8a40&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5116?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5116 +/- ##\n=======================================\n Coverage 77.28% 77.29% \n=======================================\n Files 133 133 \n Lines 22134 22135 +1 \n=======================================\n+ Hits 17107 17109 +2 \n+ Misses 5027 5026 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5116?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5116/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.44% <100.00%> (+0.01%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5116/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `75.00% <0.00%> (-0.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5116/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.06% <0.00%> (+0.12%)` | :arrow_up: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5116/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <0.00%> (+0.23%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5116?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5116?src=pr&el=footer). Last update [355954f...fa3e6dc](https://codecov.io/gh/huggingface/transformers/pull/5116?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,592 | 1,592 | 1,592 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5116/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5116/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5116",
"html_url": "https://github.com/huggingface/transformers/pull/5116",
"diff_url": "https://github.com/huggingface/transformers/pull/5116.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5116.patch",
"merged_at": 1592502426000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5115 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5115/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5115/comments | https://api.github.com/repos/huggingface/transformers/issues/5115/events | https://github.com/huggingface/transformers/pull/5115 | 641,361,386 | MDExOlB1bGxSZXF1ZXN0NDM2NjAyODg1 | 5,115 | [cleanup] generate_beam_search comments | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2139563322,
"node_id": "MDU6TGFiZWwyMTM5NTYzMzIy",
"url": "https://api.github.com/repos/huggingface/transformers/labels/cleanup",
"name": "cleanup",
"color": "e7fc49",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5115?src=pr&el=h1) Report\n> Merging [#5115](https://codecov.io/gh/huggingface/transformers/pull/5115?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/355954ffca798bb81d9db8886e30ce10f11e8a40&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5115?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5115 +/- ##\n==========================================\n+ Coverage 77.28% 77.30% +0.01% \n==========================================\n Files 133 133 \n Lines 22134 22130 -4 \n==========================================\n+ Hits 17107 17108 +1 \n+ Misses 5027 5022 -5 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5115?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5115/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.85% <100.00%> (+0.42%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5115/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.04% <100.00%> (+0.09%)` | :arrow_up: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5115/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <0.00%> (+0.23%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5115?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5115?src=pr&el=footer). Last update [355954f...5d77afc](https://codecov.io/gh/huggingface/transformers/pull/5115?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Great!"
] | 1,592 | 1,592 | 1,592 | CONTRIBUTOR | null | This PR does two small things:
1) try to make `generate_beam_search`'s inline comments consistent with the code, and fix some typos.
2) make the `cur_len` argument to `BeamHypotheses.is_done` mandatory, since it is always specified. This simplifies the logic a tiny bit (sketched below).
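For reference, a rough sketch of what (2) amounts to — `cur_len` becomes a required argument; the body below is my reconstruction of the beam-search helper, not the exact diff:

```python
def is_done(self, best_sum_logprobs: float, cur_len: int) -> bool:
    """Whether none of the open hypotheses can still beat the worst finished one."""
    if len(self) < self.num_beams:
        return False
    if self.early_stopping:
        return True
    # Without early stopping, compare against the best achievable score
    # at the current length, honoring the length penalty.
    cur_score = best_sum_logprobs / cur_len ** self.length_penalty
    return self.worst_score >= cur_score
```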
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5115/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5115/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5115",
"html_url": "https://github.com/huggingface/transformers/pull/5115",
"diff_url": "https://github.com/huggingface/transformers/pull/5115.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5115.patch",
"merged_at": 1592512224000
} |
https://api.github.com/repos/huggingface/transformers/issues/5114 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5114/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5114/comments | https://api.github.com/repos/huggingface/transformers/issues/5114/events | https://github.com/huggingface/transformers/issues/5114 | 641,306,991 | MDU6SXNzdWU2NDEzMDY5OTE= | 5,114 | data_collator.py does not allow NoneType labels for test set predictions on Glue | {
"login": "rsanjaykamath",
"id": 18527321,
"node_id": "MDQ6VXNlcjE4NTI3MzIx",
"avatar_url": "https://avatars.githubusercontent.com/u/18527321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rsanjaykamath",
"html_url": "https://github.com/rsanjaykamath",
"followers_url": "https://api.github.com/users/rsanjaykamath/followers",
"following_url": "https://api.github.com/users/rsanjaykamath/following{/other_user}",
"gists_url": "https://api.github.com/users/rsanjaykamath/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rsanjaykamath/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rsanjaykamath/subscriptions",
"organizations_url": "https://api.github.com/users/rsanjaykamath/orgs",
"repos_url": "https://api.github.com/users/rsanjaykamath/repos",
"events_url": "https://api.github.com/users/rsanjaykamath/events{/privacy}",
"received_events_url": "https://api.github.com/users/rsanjaykamath/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,592 | 1,592 | 1,592 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Distilbert
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Open the example Colab for text-classification from here https://huggingface.co/transformers/examples.html
2. Try to run the prediction function found in run_glue.py to predict on the official Glue test set.
3. The error is as shown below.
Earlier, this worked with the exact same program; since a recent update, this error shows up.
```
TypeError Traceback (most recent call last)
<ipython-input-16-9eecdd4d48b1> in <module>()
      2 output_mode = "classification"
      3
----> 4 predictions = trainer.predict(test_dataset=test_dataset).predictions
      5 if output_mode == "classification":
      6     predictions = np.argmax(predictions, axis=1)

7 frames
/usr/local/lib/python3.6/dist-packages/transformers/data/data_collator.py in default_data_collator(features)
     45     if "label" in first:
     46         dtype = torch.long if type(first["label"]) is int else torch.float
---> 47         batch["labels"] = torch.tensor([f["label"] for f in features], dtype=dtype)
     48     elif "label_ids" in first:
     49         if isinstance(first["label_ids"], torch.Tensor):

TypeError: must be real number, not NoneType
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
The error can be seen in the Colab notebook here https://colab.research.google.com/drive/1H_92qdsOOql2hS210qNrfMEEMRcAoHD_?usp=sharing
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: Colab
- Python version: NA
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?:
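A possible user-side workaround — entirely my own sketch, not a fix from the maintainers, and the helper name is hypothetical — is to collate test batches with a function that skips the `None` labels:

```python
import torch

def collate_skip_none_labels(features):
    """Like default_data_collator, but drops label/label_ids when they are None."""
    # Accept both InputFeatures-style objects and plain dicts.
    features = [vars(f) if not isinstance(f, dict) else f for f in features]
    batch = {}
    for key, value in features[0].items():
        if key in ("label", "label_ids") or value is None:
            continue
        batch[key] = torch.tensor([f[key] for f in features], dtype=torch.long)
    return batch

# Hypothetical wiring: trainer = Trainer(..., data_collator=collate_skip_none_labels)
```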
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5114/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5114/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5113 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5113/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5113/comments | https://api.github.com/repos/huggingface/transformers/issues/5113/events | https://github.com/huggingface/transformers/issues/5113 | 641,303,164 | MDU6SXNzdWU2NDEzMDMxNjQ= | 5,113 | GPU out of memory with Reformer enwik8 model | {
"login": "erickrf",
"id": 294483,
"node_id": "MDQ6VXNlcjI5NDQ4Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/294483?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/erickrf",
"html_url": "https://github.com/erickrf",
"followers_url": "https://api.github.com/users/erickrf/followers",
"following_url": "https://api.github.com/users/erickrf/following{/other_user}",
"gists_url": "https://api.github.com/users/erickrf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/erickrf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/erickrf/subscriptions",
"organizations_url": "https://api.github.com/users/erickrf/orgs",
"repos_url": "https://api.github.com/users/erickrf/repos",
"events_url": "https://api.github.com/users/erickrf/events{/privacy}",
"received_events_url": "https://api.github.com/users/erickrf/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Did you try to train with half precision using the apex/amp package?",
"No, I didn't. Anyway, this was only evaluating the pretrained model.",
"I now have installed nvidia apex and tried training a new model with fp16. It ran out of memory after around 10% of the evaluation loop. I didn't expect it, since the model shouldn't need more memory in the middle of evaluation, with all batches having the full sequence length (in my case, 16k).\r\n\r\nIs there maybe something to optimize GPU memory usage?",
"Hmm, lemme check...\r\nWhat is the sequence length and batch size you use excatly? Also, you can reduce `num_hashes` to save memory.",
"I'm using this training setup with the `Trainer` from `huggingface`:\r\n```\r\n axial_pos_emb_dim = 128, 384\r\n hidden_dim = sum(axial_pos_emb_dim)\r\n\r\n axial_pos_max = 64, 256 # the product of this is the maximum length\r\n max_length = axial_pos_max[0] * axial_pos_max[1]\r\n\r\n hidden_dropout = 0.2\r\n attn_dropout = 0.1\r\n ff_dim = 2 * hidden_dim\r\n num_heads = 8\r\n dim_per_head = hidden_dim // num_heads\r\n num_layers = 6\r\n layers = ['local', 'lsh'] * (num_layers // 2)\r\n chunk_size = 0\r\n bucket_size = 64\r\n num_hashes = 2\r\n vocab_size = 258\r\n\r\n config = ReformerConfig(\r\n dim_per_head, layers, chunk_size_feed_forward=chunk_size,\r\n axial_pos_embds_dim=axial_pos_emb_dim,\r\n axial_pos_shape=axial_pos_max,\r\n max_position_embeddings=max_length,\r\n eos_token_id=1, feed_forward_size=ff_dim,\r\n hidden_dropout_prob=hidden_dropout, hidden_size=hidden_dim,\r\n lsh_attention_probs_dropout_prob=attn_dropout,\r\n local_attention_probs_dropout_prob=attn_dropout,\r\n num_attention_heads=num_heads, num_buckets=None,\r\n pad_token_id=0, lsh_attn_chunk_length=bucket_size,\r\n num_hashes=num_hashes, vocab_size=vocab_size)\r\n model = ReformerModelWithLMHead(config)\r\n\r\n training_args = TrainingArguments(\r\n 'model', do_train=True, do_eval=True,\r\n do_predict=False, evaluate_during_training=True,\r\n gradient_accumulation_steps=1,\r\n learning_rate=0.001,\r\n logging_dir='model/tensorboard',\r\n logging_steps=5,\r\n save_steps=1000,\r\n per_device_train_batch_size=1,\r\n per_device_eval_batch_size=1,\r\n fp16=True)\r\n```\r\n\r\nUnless I'm doing something wrong, it's a batch size of 1 (both for training and evaluating) and a sequence length of 16384 (64 * 256).",
"Ok, so I found that the main culprit was that the `Trainer` was storing all model predictions in GPU memory during evaluation at https://github.com/huggingface/transformers/blob/c01480bba3b2f0bd8516679476235f4701c21b3b/src/transformers/trainer.py#L775\r\n\r\nPassing `prediction_loss_only=False` avoided that. By the way, I believe this should be the default value in the `Trainer`, and that the `cat` operation could use cpu tensors, in case the validation dataset is big. ",
"Out of curiosity, how big is your validation dataset/how large is the in-memory size of `preds` in that prediction loop?",
"@julien-c this happened with the enwik8 test set, 5M characters.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,592 | 1,599 | 1,598 | CONTRIBUTOR | null | # ❓ Questions & Help
I'm trying to run the pretrained model `google/reformer-enwik8`, but I'm getting CUDA out-of-memory errors unless I limit the sequences to one-fourth of the model capacity (~16k tokens instead of 65k).
This happens on a Titan Xp with 12GB RAM; I expected the Reformer's memory tricks to let the model fit at the original sequence length.
The code I'm running:
```python
import torch
from torch.utils.data import DataLoader
from transformers import ReformerModelWithLMHead

model = ReformerModelWithLMHead.from_pretrained('google/reformer-enwik8')
model.cuda()
config = model.config
max_len = config.max_position_embeddings
dataset = Enwik8Dataset(
    path, max_len, pad_id=config.pad_token_id,
    eos_id=config.eos_token_id)
loader = DataLoader(dataset, batch_size=1, shuffle=False)
acc_loss = 0
for batch in loader:
    with torch.no_grad():
        batch_loss = model(input_ids=batch, labels=batch)[0]
    acc_loss += batch_loss.mean().item()
acc_loss /= len(dataset)
```
The Enwik8Dataset inherits from Dataset and does the basic data preprocessing; I can post the code if necessary.
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**: https://stackoverflow.com/questions/62373033/gpu-out-of-memory-with-enwik8-reformer-from-huggingface
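A sketch of the two memory levers that come up in the comment thread — fewer hash rounds and keeping only the scalar loss. It reuses the objects defined above; passing `num_hashes` at forward time is my assumption about the Reformer API, so treat this as illustrative:

```python
acc_loss = 0
for batch in loader:
    with torch.no_grad():
        batch = batch.to(model.device)
        # num_hashes=1 trades LSH accuracy for a smaller attention footprint;
        # keeping only the scalar loss avoids accumulating logits on the GPU.
        batch_loss = model(input_ids=batch, labels=batch, num_hashes=1)[0]
    acc_loss += batch_loss.mean().item()
acc_loss /= len(dataset)
```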
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5113/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5113/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5112 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5112/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5112/comments | https://api.github.com/repos/huggingface/transformers/issues/5112/events | https://github.com/huggingface/transformers/issues/5112 | 641,293,709 | MDU6SXNzdWU2NDEyOTM3MDk= | 5,112 | Strange exception | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"The notebook is not using the collation function from `transformers` but the `T2TDataCollator` it defines. Apart from stopping subclassing `DataCollator` (as it's not a class anymore on master) this shouldn't be a problem.",
"Hi, @antoniomastro1996 , if you are using transformers from master branch then there are few changes.\r\nAs @sgugger said, `DataCollator` is not a class anymore, so just don't subclass. Also as datacollator is now a `callable`, rename the `batch_collate` method to `__call__`.\r\n\r\nOr you can use version 2.11.0 and it'll work as it is. I'll update the notebook once the new version is released. Let me know if you run into any other issues.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,592 | 1,598 | 1,598 | NONE | null | # ❓ Questions & Help
Hi everybody,
I'm keeping experiment with the following colab:
https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb#scrollTo=X9_Go99fvW-z
however I got a new error after the fine-tuning starts:
Exception in thread Thread-20:
Traceback (most recent call last):
File "/usr/lib/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "/usr/lib/python3.6/threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/parallel_loader.py", line 141, in _loader_worker
_, data = next(data_iter)
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 354, in __next__
data = self._next_data()
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 394, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
return self.collate_fn(data)
File "<ipython-input-6-cc30f67d4a07>", line 34, in collate_batch
input_ids = torch.stack([example['input_ids'] for example in batch])
File "<ipython-input-6-cc30f67d4a07>", line 34, in <listcomp>
input_ids = torch.stack([example['input_ids'] for example in batch])
TypeError: string indices must be integers
Is this due to the new changes introduced yesterday for the DataCollator?
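Per the replies in the thread, yes: on master the data collators are now plain callables, so the fix is to drop the `DataCollator` base class and rename the collate method to `__call__` — roughly:

```python
import torch

class T2TDataCollator:  # no DataCollator base class on master
    def __call__(self, batch):
        # batch is a list of feature dicts holding tensors
        input_ids = torch.stack([example["input_ids"] for example in batch])
        ...  # rest of the original collate logic unchanged
```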
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5112/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5112/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5111 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5111/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5111/comments | https://api.github.com/repos/huggingface/transformers/issues/5111/events | https://github.com/huggingface/transformers/issues/5111 | 641,287,244 | MDU6SXNzdWU2NDEyODcyNDQ= | 5,111 | [Marian] Run predictions on GPU? RuntimeError: Expected object of device type cuda but got device type cpu for argument #3 'index' in call to _th_index_select` | {
"login": "mykhailoslukvin",
"id": 66723753,
"node_id": "MDQ6VXNlcjY2NzIzNzUz",
"avatar_url": "https://avatars.githubusercontent.com/u/66723753?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mykhailoslukvin",
"html_url": "https://github.com/mykhailoslukvin",
"followers_url": "https://api.github.com/users/mykhailoslukvin/followers",
"following_url": "https://api.github.com/users/mykhailoslukvin/following{/other_user}",
"gists_url": "https://api.github.com/users/mykhailoslukvin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mykhailoslukvin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mykhailoslukvin/subscriptions",
"organizations_url": "https://api.github.com/users/mykhailoslukvin/orgs",
"repos_url": "https://api.github.com/users/mykhailoslukvin/repos",
"events_url": "https://api.github.com/users/mykhailoslukvin/events{/privacy}",
"received_events_url": "https://api.github.com/users/mykhailoslukvin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Is your `batch` on the correct device too?",
"Thank you, I moved `batch` to the GPU and it works.\r\n`batch = tokenizer.prepare_translation_batch(sentences).to('cuda')`",
"More generally: `batch.to(model.device)`."
] | 1,592 | 1,617 | 1,592 | NONE | null | Is it possible to run the Marian NMT model predictions on GPU, should not it be faster? Once I try to move the model to GPU I get the error described below. Is there any quick fix to the code to achieve this?
```
torch.__version__
Out[15]: '1.5.0'
```
The code to reproduce:
```
from transformers import MarianMTModel, MarianTokenizer
model_name = 'Helsinki-NLP/opus-mt-fr-en'
model = MarianMTModel.from_pretrained(model_name).cuda()
tokenizer = MarianTokenizer.from_pretrained(model_name)
```
Then I check the current device:
```
model.device
Out[33]: device(type='cuda', index=0)
```
Then I try to predict following the examples from documentation:
```
batch = tokenizer.prepare_translation_batch(list_of_sents)
gen = model.generate(**batch, num_beam=1, early_stopping = True, no_repeat_ngram_size = 3)
File "<ipython-input-9-54f44d46fb14>", line 1, in <module>
gen = model.generate(**batch, num_beam=1, early_stopping = True, no_repeat_ngram_size = 3)
File "/home/ms/anaconda3/envs/git_face/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 15, in decorate_context
return func(*args, **kwargs)
File "/home/ms/anaconda3/envs/git_face/lib/python3.7/site-packages/transformers/modeling_utils.py", line 1146, in generate
encoder_outputs: tuple = encoder(input_ids, attention_mask=attention_mask)
File "/home/ms/anaconda3/envs/git_face/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/home/ms/anaconda3/envs/git_face/lib/python3.7/site-packages/transformers/modeling_bart.py", line 292, in forward
inputs_embeds = self.embed_tokens(input_ids) * self.embed_scale
File "/home/ms/anaconda3/envs/git_face/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/home/ms/anaconda3/envs/git_face/lib/python3.7/site-packages/torch/nn/modules/sparse.py", line 114, in forward
self.norm_type, self.scale_grad_by_freq, self.sparse)
File "/home/ms/anaconda3/envs/git_face/lib/python3.7/site-packages/torch/nn/functional.py", line 1724, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected object of device type cuda but got device type cpu for argument #3 'index' in call to _th_index_select`
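# Fix from the comment thread (hedged sketch, illustrative only): the model
# is on CUDA but the batch tensors are still on CPU, so move the batch to
# the model's device before generating -- batch.to(model.device), per the
# maintainers. Note the keyword is num_beams, plural:
batch = tokenizer.prepare_translation_batch(list_of_sents).to(model.device)
gen = model.generate(**batch, num_beams=1, early_stopping=True, no_repeat_ngram_size=3)
translations = [tokenizer.decode(g, skip_special_tokens=True) for g in gen]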
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5111/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5111/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5110 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5110/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5110/comments | https://api.github.com/repos/huggingface/transformers/issues/5110/events | https://github.com/huggingface/transformers/pull/5110 | 641,278,345 | MDExOlB1bGxSZXF1ZXN0NDM2NTMyMzI4 | 5,110 | XLMForMultipleChoice | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5110?src=pr&el=h1) Report\n> Merging [#5110](https://codecov.io/gh/huggingface/transformers/pull/5110?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/355954ffca798bb81d9db8886e30ce10f11e8a40&el=desc) will **increase** coverage by `0.03%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5110?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5110 +/- ##\n==========================================\n+ Coverage 77.28% 77.32% +0.03% \n==========================================\n Files 133 133 \n Lines 22134 22164 +30 \n==========================================\n+ Hits 17107 17139 +32 \n+ Misses 5027 5025 -2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5110?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/5110/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.16% <ø> (ø)` | |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5110/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `70.76% <ø> (ø)` | |\n| [src/transformers/modeling\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/5110/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG0ucHk=) | `90.23% <100.00%> (+0.98%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5110/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.42% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5110/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.06% <0.00%> (+0.12%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5110?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5110?src=pr&el=footer). Last update [355954f...439c7d0](https://codecov.io/gh/huggingface/transformers/pull/5110?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"In the end this was added by #5614, so closing here."
] | 1,592 | 1,598 | 1,598 | MEMBER | null | This PR adds `XLMForMultipleChoice`. One of the missing models in this [project](https://github.com/huggingface/transformers/projects/17)
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5110/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5110/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5110",
"html_url": "https://github.com/huggingface/transformers/pull/5110",
"diff_url": "https://github.com/huggingface/transformers/pull/5110.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5110.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/5109 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5109/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5109/comments | https://api.github.com/repos/huggingface/transformers/issues/5109/events | https://github.com/huggingface/transformers/issues/5109 | 641,266,116 | MDU6SXNzdWU2NDEyNjYxMTY= | 5,109 | Flaky tests sometimes caused by S3 failures | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
},
{
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
},
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
},
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"I can't replicate (I understand that's the point :)) \r\n\r\nThe traceback from your link is:\r\n\r\n```python\r\nOSError: Can't load weights for 'sshleifer/tiny-distilroberta-base'. Make sure that:\r\nE \r\nE - 'sshleifer/tiny-distilroberta-base' is a correct model identifier listed on 'https://huggingface.co/models'\r\nE \r\nE - or 'sshleifer/tiny-distilroberta-base' is the correct path to a directory containing a file named one of pytorch_model.bin, tf_model.h5, model.ckpt.\r\n```\r\n\r\nContents are fine, it seems:\r\n\r\n\r\n\r\n\r\n@julien-c have you seen similar intermittent S3 failures?",
"From what I experience, this happens not just for the same model, but for different ones. Very hard to traceback what is going on there IMO.",
"Yes, I had a similar circleci failure trying to get the bert tokenizer yesterday.\r\n",
"I have found a clue. I hypothesize that some models have user posted metadata that somehow ducks them up. \r\n\r\nI have never seen flaky bart-large failure, and there is no metadata:\r\n\r\n```bash\r\n\r\naws s3api head-object --bucket models.huggingface.co --key bert/facebook/bart-large/config.json\r\n\r\n=> bytes\t1264\tapplication/json\t\"faf4c3c00764dc47a3a10a63a004dcc3\"\tFri, 24 Apr 2020 15:58:48 GMT\tJhIFsOvvLtrLn0vJjGNN6ZhJGUlbXBEP\r\n\r\n```\r\n\r\nwhereas there are `Helsinki-NLP/opus-mt-en-ROMANCE` failures, and metadata:\r\n\r\n```bash\r\naws s3api head-object --bucket models.huggingface.co --key bert/Helsinki-NLP/opus-mt-en-ROMANCE/config.json\r\n\r\n=> bytes\t1113\ttext/plain\t\"13ca8d49ee7f02a26f935cb4a60e6557\"\tTue, 12 May 2020 22:39:10 GMT\tjAI94kJ_exk0tG6z0Yr.Rea4_j0g02Ih\r\nMETADATA\tatime:1589321950/ctime:1589321949/gid:1007/gname:shleifer/md5:13ca8d49ee7f02a26f935cb4a60e6557/mode:33188/mtime:1589321949/uid:1006/uname:shleifer\r\n```\r\n\r\n\r\n\r\nNow I need to figure out how to get rid of the metadata to test my hypothesis, if anyone knows how.\r\n\r\n**Update:**\r\n```bash\r\nbytes\t474\tapplication/json\t\"622babbd58848ec69a2433ba0c6edab3\"\tTue, 12 May 2020 01:26:15 GMT\tUKLWouonOr8duRlrHqH4.iNZEF5bmOQY\r\nMETADATA\tatime:1589246769/ctime:1589246726/gid:20/gname:staff/md5:622babbd58848ec69a2433ba0c6edab3/mode:33188/mtime:1589246726/uid:501/uname:shleifer\r\n```\r\nalso shows metadata, and there are flaky failures with that model.\r\n",
"Should be easy, I can guide you through the S3 doc if needed.\r\n\r\nI doubt that's the reason though. What about intermittent connectivity issues? They seem more likely.",
"It could certainly be connectivity. I manually deleted the metadata for all contents of `Helsinki-NLP/opus-mt-en-ROMANCE/` and `sshleifer/tiny-distilroberta-base`, so if there are more failures there we will know that my theory is wrong. I think the metadata is caused by using `s3cmd` instead of `awscli`.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,592 | 1,599 | 1,599 | MEMBER | null | # 🐛 Bug
## Information
The test `tests/test_pipelines.py::MonoColumnInputTestCase::test_torch_fill_mask` is flaky on CircleCI: https://app.circleci.com/pipelines/github/huggingface/transformers/7734/workflows/7c51cfcf-5425-4172-aa15-ed677e37f7fc/jobs/49891/steps
## To reproduce
Not sure. It pops up from time to time on CircleCI.
We still need to investigate more.
## Expected behavior
The test should not be flaky
## Environment info
On CircleCI
"url": "https://api.github.com/repos/huggingface/transformers/issues/5109/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5109/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5108 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5108/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5108/comments | https://api.github.com/repos/huggingface/transformers/issues/5108/events | https://github.com/huggingface/transformers/pull/5108 | 641,255,717 | MDExOlB1bGxSZXF1ZXN0NDM2NTEzMzY3 | 5,108 | Create README.md | {
"login": "flozi00",
"id": 47894090,
"node_id": "MDQ6VXNlcjQ3ODk0MDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/flozi00",
"html_url": "https://github.com/flozi00",
"followers_url": "https://api.github.com/users/flozi00/followers",
"following_url": "https://api.github.com/users/flozi00/following{/other_user}",
"gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flozi00/subscriptions",
"organizations_url": "https://api.github.com/users/flozi00/orgs",
"repos_url": "https://api.github.com/users/flozi00/repos",
"events_url": "https://api.github.com/users/flozi00/events{/privacy}",
"received_events_url": "https://api.github.com/users/flozi00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5108?src=pr&el=h1) Report\n> Merging [#5108](https://codecov.io/gh/huggingface/transformers/pull/5108?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/355954ffca798bb81d9db8886e30ce10f11e8a40&el=desc) will **decrease** coverage by `0.08%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5108?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5108 +/- ##\n==========================================\n- Coverage 77.28% 77.20% -0.09% \n==========================================\n Files 133 133 \n Lines 22134 22134 \n==========================================\n- Hits 17107 17088 -19 \n- Misses 5027 5046 +19 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5108?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5108/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `76.35% <0.00%> (-2.30%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5108/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.49% <0.00%> (-0.94%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5108/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `75.00% <0.00%> (-0.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5108/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.06% <0.00%> (+0.12%)` | :arrow_up: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5108/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <0.00%> (+0.23%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5108?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5108?src=pr&el=footer). Last update [355954f...6330b4e](https://codecov.io/gh/huggingface/transformers/pull/5108?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,592 | 1,592 | 1,592 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5108/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5108/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5108",
"html_url": "https://github.com/huggingface/transformers/pull/5108",
"diff_url": "https://github.com/huggingface/transformers/pull/5108.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5108.patch",
"merged_at": 1592988352000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5107 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5107/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5107/comments | https://api.github.com/repos/huggingface/transformers/issues/5107/events | https://github.com/huggingface/transformers/pull/5107 | 641,217,204 | MDExOlB1bGxSZXF1ZXN0NDM2NDgxMjc4 | 5,107 | Create README.md | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"> @julien-c check out that dataset meta tag is right\r\n\r\nIt seems everythigs is right"
] | 1,592 | 1,592 | 1,592 | CONTRIBUTOR | null | @julien-c check out that dataset meta tag is right | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5107/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5107/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5107",
"html_url": "https://github.com/huggingface/transformers/pull/5107",
"diff_url": "https://github.com/huggingface/transformers/pull/5107.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5107.patch",
"merged_at": 1592848050000
} |
https://api.github.com/repos/huggingface/transformers/issues/5106 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5106/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5106/comments | https://api.github.com/repos/huggingface/transformers/issues/5106/events | https://github.com/huggingface/transformers/issues/5106 | 641,211,104 | MDU6SXNzdWU2NDEyMTExMDQ= | 5,106 | How can I initialize RobertaForSequenceClassification empty? | {
"login": "raj5287",
"id": 11444890,
"node_id": "MDQ6VXNlcjExNDQ0ODkw",
"avatar_url": "https://avatars.githubusercontent.com/u/11444890?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/raj5287",
"html_url": "https://github.com/raj5287",
"followers_url": "https://api.github.com/users/raj5287/followers",
"following_url": "https://api.github.com/users/raj5287/following{/other_user}",
"gists_url": "https://api.github.com/users/raj5287/gists{/gist_id}",
"starred_url": "https://api.github.com/users/raj5287/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/raj5287/subscriptions",
"organizations_url": "https://api.github.com/users/raj5287/orgs",
"repos_url": "https://api.github.com/users/raj5287/repos",
"events_url": "https://api.github.com/users/raj5287/events{/privacy}",
"received_events_url": "https://api.github.com/users/raj5287/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! If you want to initialize a new `RobertaForSequenceClassification` model you can do so as such:\r\n\r\n```py\r\nfrom transformers import RobertaForSequenceClassification, RobertaConfig\r\n\r\nconfig = RobertaConfig(\r\n # put the args you need, like the vocab size\r\n vocab_size=100\r\n)\r\n\r\nmodel = RobertaForSequenceClassification(config)\r\n```",
"I had the same problem when i try to finetune RoBERTa model with my own tokenizer as @raj5287 did. I have already updated all packages (pytorch, transformers)\r\n\r\nCode:\r\n\r\n> tokenizer = RobertaTokenizerFast.from_pretrained(tokenizer_path)\r\n> model = RobertaForMaskedLM.from_pretrained(self.config)\r\n> dataset = LineByLineTextDataset(\r\n> tokenizer=tokenizer,\r\n> file_path=input_text_path,\r\n> block_size=512,\r\n> )\r\n> \r\n> data_collator = DataCollatorForLanguageModeling(\r\n> tokenizer=tokenizer, mlm=True, mlm_probability=0.15\r\n> )\r\n> training_args = TrainingArguments(\r\n> output_dir=output_path,\r\n> overwrite_output_dir=True,\r\n> num_train_epochs=3,\r\n> per_gpu_train_batch_size=8,\r\n> save_steps=50_000,\r\n> save_total_limit=2,\r\n> )\r\n> \r\n> trainer = Trainer(\r\n> model=model,\r\n> args=training_args,\r\n> data_collator=data_collator,\r\n> train_dataset=dataset,\r\n> prediction_loss_only=True\r\n> )\r\n> trainer.train()\r\n> trainer.save_model(output_path)\r\n> \r\n\r\nOutput:\r\n\r\nFile \"Fine_tuning_tokenizer.py\", line 106, in <module>\r\n main(sys.argv[1:])\r\n File \"Fine_tuning_tokenizer.py\", line 94, in main\r\n modelRoberta.fine_tune_LM(inputfile, outputDir,tokenizer_path=outputDir)\r\n File \"/sentiment-embeddings/projeto-modularizar/src/uff/ic/mell/sentimentembedding/modelos/modelo_roberta.py\", line 58, in fine_tune_LM\r\n trainer.train()\r\n File \"/home/apps/anaconda3/lib/python3.7/site-packages/transformers/trainer.py\", line 549, in train\r\n tr_loss += self.training_step(model, inputs)\r\n File \"/home/apps/anaconda3/lib/python3.7/site-packages/transformers/trainer.py\", line 762, in training_step\r\n return loss.item()\r\nRuntimeError: CUDA error: an illegal memory access was encountered",
"> I had the same problem when i try to finetune RoBERTa model with my own tokenizer as @raj5287 did. I have already updated all packages (pytorch, transformers)\r\n> \r\n> Code:\r\n> \r\n> > tokenizer = RobertaTokenizerFast.from_pretrained(tokenizer_path)\r\n> > model = RobertaForMaskedLM.from_pretrained(self.config)\r\n> > dataset = LineByLineTextDataset(\r\n> > tokenizer=tokenizer,\r\n> > file_path=input_text_path,\r\n> > block_size=512,\r\n> > )\r\n> > ```\r\n> > data_collator = DataCollatorForLanguageModeling(\r\n> > tokenizer=tokenizer, mlm=True, mlm_probability=0.15\r\n> > )\r\n> > training_args = TrainingArguments(\r\n> > output_dir=output_path,\r\n> > overwrite_output_dir=True,\r\n> > num_train_epochs=3,\r\n> > per_gpu_train_batch_size=8,\r\n> > save_steps=50_000,\r\n> > save_total_limit=2,\r\n> > )\r\n> > \r\n> > trainer = Trainer(\r\n> > model=model,\r\n> > args=training_args,\r\n> > data_collator=data_collator,\r\n> > train_dataset=dataset,\r\n> > prediction_loss_only=True\r\n> > )\r\n> > trainer.train()\r\n> > trainer.save_model(output_path)\r\n> > ```\r\n> \r\n> Output:\r\n> \r\n> File \"Fine_tuning_tokenizer.py\", line 106, in \r\n> main(sys.argv[1:])\r\n> File \"Fine_tuning_tokenizer.py\", line 94, in main\r\n> modelRoberta.fine_tune_LM(inputfile, outputDir,tokenizer_path=outputDir)\r\n> File \"/sentiment-embeddings/projeto-modularizar/src/uff/ic/mell/sentimentembedding/modelos/modelo_roberta.py\", line 58, in fine_tune_LM\r\n> trainer.train()\r\n> File \"/home/apps/anaconda3/lib/python3.7/site-packages/transformers/trainer.py\", line 549, in train\r\n> tr_loss += self.training_step(model, inputs)\r\n> File \"/home/apps/anaconda3/lib/python3.7/site-packages/transformers/trainer.py\", line 762, in training_step\r\n> return loss.item()\r\n> RuntimeError: CUDA error: an illegal memory access was encountered\r\n\r\nIs it ok to use \"DataCollatorForLanguageModeling\" in a classification task? ",
"> I had the same problem when i try to finetune RoBERTa model with my own tokenizer as @raj5287 did. I have already updated all packages (pytorch, transformers)\r\n> \r\n> Code:\r\n> \r\n> > tokenizer = RobertaTokenizerFast.from_pretrained(tokenizer_path)\r\n> > model = RobertaForMaskedLM.from_pretrained(self.config)\r\n> > dataset = LineByLineTextDataset(\r\n> > tokenizer=tokenizer,\r\n> > file_path=input_text_path,\r\n> > block_size=512,\r\n> > )\r\n> > ```\r\n> > data_collator = DataCollatorForLanguageModeling(\r\n> > tokenizer=tokenizer, mlm=True, mlm_probability=0.15\r\n> > )\r\n> > training_args = TrainingArguments(\r\n> > output_dir=output_path,\r\n> > overwrite_output_dir=True,\r\n> > num_train_epochs=3,\r\n> > per_gpu_train_batch_size=8,\r\n> > save_steps=50_000,\r\n> > save_total_limit=2,\r\n> > )\r\n> > \r\n> > trainer = Trainer(\r\n> > model=model,\r\n> > args=training_args,\r\n> > data_collator=data_collator,\r\n> > train_dataset=dataset,\r\n> > prediction_loss_only=True\r\n> > )\r\n> > trainer.train()\r\n> > trainer.save_model(output_path)\r\n> > ```\r\n> \r\n> Output:\r\n> \r\n> File \"Fine_tuning_tokenizer.py\", line 106, in\r\n> main(sys.argv[1:])\r\n> File \"Fine_tuning_tokenizer.py\", line 94, in main\r\n> modelRoberta.fine_tune_LM(inputfile, outputDir,tokenizer_path=outputDir)\r\n> File \"/sentiment-embeddings/projeto-modularizar/src/uff/ic/mell/sentimentembedding/modelos/modelo_roberta.py\", line 58, in fine_tune_LM\r\n> trainer.train()\r\n> File \"/home/apps/anaconda3/lib/python3.7/site-packages/transformers/trainer.py\", line 549, in train\r\n> tr_loss += self.training_step(model, inputs)\r\n> File \"/home/apps/anaconda3/lib/python3.7/site-packages/transformers/trainer.py\", line 762, in training_step\r\n> return loss.item()\r\n> RuntimeError: CUDA error: an illegal memory access was encountered\r\n\r\nHey @SergioBarretoJr ,I guess one of the causes is when the model encounters an unseen token. and no we cannot use DataCollatorForLanguageModeling for classification task @Chandrak1907. will have to loop through epochs for training and fine tuning.",
"Thks guys @raj5287 @Chandrak1907 ! I solved this problems! When I trained my BPE tokenizer, I defined my vocabulary size of 52.000 and I used RoBERTa config from hugging face that has a different vocabulary size. This caused this error.",
"@raj5287 , @SergioBarretoJr Can you share github link for your code? I could not find proper reference on huggingface.co. Thanks.",
"@SergioBarretoJr I am also facing the same issue, can you please help me?",
"> @SergioBarretoJr I am also facing the same issue, can you please help me?\r\n\r\n@amandalmia14 Have you ever checked if you set vocab size equal in tokenizer and RoBERTa.config file? \r\n\r\n def train_tokenizer(self,file_path,outDir):\r\n # Initialize a tokenizer\r\n tokenizer = ByteLevelBPETokenizer()\r\n # Customize training\r\n tokenizer.train(files=file_path, **vocab_size=52_000**, min_frequency=2, special_tokens=[\r\n \"<s>\",\r\n \"<pad>\",\r\n \"</s>\",\r\n \"<unk>\",\r\n \"<mask>\",\r\n ])\r\n self.tokenizer=tokenizer\r\n tokenizer.save(outDir)\r\n\r\n\r\nThis was how I solver my issue. Can I help with something more?",
"@SergioBarretoJr Thanks for the snippet, I am able to train the custom language model with the help of [this](https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb). However, after creating the custom Roberta Model, I want to use it for creating a multi-label sentence classifier. \r\n\r\nCan you help me with that snippet of code? Thanks "
] | 1,592 | 1,599 | 1,592 | NONE | null | I have my own custom dataset, which is completely different from the one RoBERTa was trained on. Using ByteLevelBPETokenizer, I am creating vocab.json and merges.txt. I am using these two files to initialize RobertaTokenizerFast for encoding my corpus. Now I am training a RobertaForSequenceClassification for a binary classification problem. When I initialize RobertaForSequenceClassification from any of the pre-trained models, I get
`IndexError: index out of range in self` on CPU, while on GPU I get `RuntimeError: CUDA error: an illegal memory access was encountered`. I have followed other [issues](https://github.com/pytorch/pytorch/issues/21819), but to no avail. My understanding is that, since I am creating my own vocabulary, some of the tokens are not in the pre-trained model. So is there a way to initialize RobertaForSequenceClassification empty, or to train this classification model on my dataset? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5106/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5106/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5105 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5105/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5105/comments | https://api.github.com/repos/huggingface/transformers/issues/5105/events | https://github.com/huggingface/transformers/issues/5105 | 641,195,120 | MDU6SXNzdWU2NDExOTUxMjA= | 5,105 | Is there a helper script to randomly mask spans of text for T5 pretraining? | {
"login": "abhisheknovoic",
"id": 62595485,
"node_id": "MDQ6VXNlcjYyNTk1NDg1",
"avatar_url": "https://avatars.githubusercontent.com/u/62595485?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhisheknovoic",
"html_url": "https://github.com/abhisheknovoic",
"followers_url": "https://api.github.com/users/abhisheknovoic/followers",
"following_url": "https://api.github.com/users/abhisheknovoic/following{/other_user}",
"gists_url": "https://api.github.com/users/abhisheknovoic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhisheknovoic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhisheknovoic/subscriptions",
"organizations_url": "https://api.github.com/users/abhisheknovoic/orgs",
"repos_url": "https://api.github.com/users/abhisheknovoic/repos",
"events_url": "https://api.github.com/users/abhisheknovoic/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhisheknovoic/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Not yet -> on the roadmap :-) ",
"any updates here?"
] | 1,592 | 1,671 | 1,592 | NONE | null | Hello team,
If I have an input text like
```The cute dog walks in the park```
Is there a library within HuggingFace that can output the following for me, by masking random spans of text and (optionally) taking an input masking probability?
```
masked input: The <extra_id_1> walks in <extra_id_2> park
target: <extra_id_1> cute dog <extra_id_2> the <extra_id_3> </s>
```
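For reference, a minimal sketch of this kind of sentinel span corruption. The function, the word-level granularity, and the fixed span length are my own simplifying assumptions, not an existing transformers API; note that T5's own sentinels start at `<extra_id_0>`, while the example above starts at 1:
```python
import random

def mask_spans(words, mask_prob=0.15, span_len=2):
    # words: whitespace-split tokens; returns (masked_input, target) strings
    masked, target, sentinel, i = [], [], 0, 0
    while i < len(words):
        if random.random() < mask_prob / span_len:
            # start a masked span: one sentinel in the input, the span in the target
            masked.append(f"<extra_id_{sentinel}>")
            target.append(f"<extra_id_{sentinel}>")
            target.extend(words[i:i + span_len])
            sentinel += 1
            i += span_len
        else:
            masked.append(words[i])
            i += 1
    target.append(f"<extra_id_{sentinel}>")  # final sentinel closes the last span
    # adjacent spans are not merged in this simple version
    return " ".join(masked), " ".join(target) + " </s>"

masked_input, target = mask_spans("The cute dog walks in the park".split())
```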
This is for preparing data for T5 within my own codebase. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5105/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5105/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5104 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5104/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5104/comments | https://api.github.com/repos/huggingface/transformers/issues/5104/events | https://github.com/huggingface/transformers/issues/5104 | 641,162,417 | MDU6SXNzdWU2NDExNjI0MTc= | 5,104 | Trainer evaluation doesn't return eval loss for question-answering. | {
"login": "avacaondata",
"id": 35173563,
"node_id": "MDQ6VXNlcjM1MTczNTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/35173563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/avacaondata",
"html_url": "https://github.com/avacaondata",
"followers_url": "https://api.github.com/users/avacaondata/followers",
"following_url": "https://api.github.com/users/avacaondata/following{/other_user}",
"gists_url": "https://api.github.com/users/avacaondata/gists{/gist_id}",
"starred_url": "https://api.github.com/users/avacaondata/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/avacaondata/subscriptions",
"organizations_url": "https://api.github.com/users/avacaondata/orgs",
"repos_url": "https://api.github.com/users/avacaondata/repos",
"events_url": "https://api.github.com/users/avacaondata/events{/privacy}",
"received_events_url": "https://api.github.com/users/avacaondata/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I solved it editing trainer.py, in this line https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py#L765 . I added start_positions and end_positions to that \"possible labels names\" list, and it worked. Review the bug, please.",
"Thanks @alexvaca0 !\r\n\r\ncc @julien-c , @sgugger ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"> I solved it editing trainer.py, in this line https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py#L765 . I added start_positions and end_positions to that \"possible labels names\" list, and it worked. Review the bug, please.\r\n\r\nI am having the same issue. Can you elaborate on how you solved this problem? I could not locate a possible label list and start/end positions?",
"#7191 should solve the problem described here.",
"@sgugger How can we calculate validation and evaluation loss for question answering finetuning pipeline (run_qa.py)"
] | 1,592 | 1,627 | 1,598 | NONE | null | As mentioned in this issue:
"""
Just a note that I tried `python run_squad_trainer.py --model_name_or_path bert-base-uncased --model_type bert --data_dir squad --output_dir /tmp/debug_squad/ --overwrite_output_dir --do_train --do_eval --evaluate_during_training --logging_steps 100`.
For some reason I don't get any evaluation metric during training (I was expecting `loss` or `eval_loss`).
_Originally posted by @borisdayma in https://github.com/huggingface/transformers/pull/4829#issuecomment-644338712_
"""
I'm facing the same problem. I'm trying to train with Trainer class over a QA dataset different from SQUAD. Everything works fine, the model learns based on train loss. However, I haven't been able to get the eval loss. I hope these pieces of code show how I'm configuring Trainer. Can somebody tell me if I'm doing something wrong?
```
training_args = TrainingArguments(
output_dir="./models/prueba_2",
per_gpu_train_batch_size=16,
per_gpu_eval_batch_size=32,
num_train_epochs=10,
logging_steps=10,
save_steps=25,
do_eval=True,
evaluate_during_training=True,
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
data_collator=collator,
prediction_loss_only=True,
compute_metrics = EvalPrediction
)
```
I also tried without EvalPrediction in compute_metrics.
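In case it helps while this is investigated, here is a minimal sketch (my own code, not a Trainer feature) that computes the eval loss manually by passing `start_positions`/`end_positions` to the model. It assumes the collator yields dicts of tensors whose keys match the model's forward signature:
```python
import torch
from torch.utils.data import DataLoader

def qa_eval_loss(model, dataset, collator, batch_size=32, device="cuda"):
    # assumption: batches contain input_ids / attention_mask /
    # start_positions / end_positions, as expected by QA models
    loader = DataLoader(dataset, batch_size=batch_size, collate_fn=collator)
    model.eval()
    total_loss, steps = 0.0, 0
    with torch.no_grad():
        for batch in loader:
            batch = {k: v.to(device) for k, v in batch.items()}
            loss = model(**batch)[0]  # QA models return the loss first when positions are passed
            total_loss += loss.item()
            steps += 1
    return total_loss / max(steps, 1)

# e.g. print(qa_eval_loss(model, eval_dataset, collator))
```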
Thanks in advance!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5104/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5104/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5103 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5103/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5103/comments | https://api.github.com/repos/huggingface/transformers/issues/5103/events | https://github.com/huggingface/transformers/pull/5103 | 641,130,420 | MDExOlB1bGxSZXF1ZXN0NDM2NDA4Nzk4 | 5,103 | Tokenizers API developments | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5103?src=pr&el=h1) Report\n> Merging [#5103](https://codecov.io/gh/huggingface/transformers/pull/5103?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/417e492f1e832c0b93512600d3385aa4c8a887c9&el=desc) will **decrease** coverage by `0.89%`.\n> The diff coverage is `87.26%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5103?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5103 +/- ##\n==========================================\n- Coverage 77.98% 77.09% -0.90% \n==========================================\n Files 138 138 \n Lines 23786 23836 +50 \n==========================================\n- Hits 18550 18376 -174 \n- Misses 5236 5460 +224 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5103?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5103/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `91.48% <78.78%> (-3.34%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/5103/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `91.97% <85.71%> (-0.63%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5103/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `90.86% <91.93%> (-0.21%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5103/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `89.95% <100.00%> (-0.51%)` | :arrow_down: |\n| [src/transformers/tokenization\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5103/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZ3B0Mi5weQ==) | `97.12% <100.00%> (+0.04%)` | :arrow_up: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5103/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `94.52% <100.00%> (ø)` | |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5103/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.48% <100.00%> (+0.13%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5103/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `19.92% <0.00%> (-75.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5103/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.86% <0.00%> (-0.15%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5103?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5103?src=pr&el=footer). Last update [417e492...ca291b5](https://codecov.io/gh/huggingface/transformers/pull/5103?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,592 | 1,592 | 1,592 | MEMBER | null | Some developments on the tokenizer's API to make it more flexible and easy to use for dynamic/uniform size batching.
- add a `return_lengths` option to the encoding methods of both the slow and fast tokenizers, to get a column with input lengths (useful for sorting afterward)
- make `tokenizer.pad()` accept a list of dicts so it can be used as a `collate_fn` in a PyTorch data loader (see the usage sketch after this list).
- add a safety check that all the `kwargs` passed to encoding methods are recognized (avoiding silent errors); raise a warning if unrecognized, not an error for now (to reduce breaking-change behavior).
- standardize the arguments of the encoding methods under unified names (`return_attention_masks` => `return_attention_mask` and `return_special_tokens_masks` => `return_special_tokens_mask`). This is a slight breaking change, but even the maintainers of the lib were confusing the two (which usually resulted in silent errors hidden behind the above-mentioned generic use of kwargs in the encoding methods).
- use the `tokenizers` library `AddedToken` class to control more finely how added (and special) tokens are tokenized (do we tokenize them inside words, do we strip left and/or right whitespace around them). This class gives a simpler and cleaner behavior for `RobertaTokenizer`.
- change the way special tokens can be added to the tokenizers. We now have more clearly separated paths:
* giving a string at tokenizer initialization or setting an attribute (e.g. `tokenizer.mask_token`) updates the relevant attribute in the tokenizer class but doesn't add tokens to the vocabulary. We do that because the initialization of the special tokens is usually done before the backend vocabulary is set up. This automatic and non-explicit addition of tokens can also be error-prone and a source of silent errors.
* the recommended way to add a token and use it as a special token is to use `add_special_tokens`, which will store the attribute AND add the token to the vocabulary if needed, and which returns the number of added tokens
* a new method `tokenizer.sanitize_special_tokens()` can be used to make sure all the special tokens are in the vocabulary and to add them automatically if that's not the case.
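As a usage note for the `collate_fn` point above, here is a minimal sketch of the intended pattern. This is my own illustration; in particular, `return_tensors` support in `pad()` is an assumption on my part, not something this description guarantees:
```python
from torch.utils.data import DataLoader
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
texts = ["a short example", "a somewhat longer example sentence"]

# encode without padding; padding happens dynamically per batch in pad()
encoded = [tokenizer.encode_plus(t) for t in texts]  # list of dicts, as pad() now accepts

def collate_fn(examples):
    # assumption: pad() can return PyTorch tensors directly via return_tensors
    return tokenizer.pad(examples, return_tensors="pt")

loader = DataLoader(encoded, batch_size=2, collate_fn=collate_fn)
```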
This PR should in particular cleanly fix the Roberta-specific behavior discussed in:
- https://github.com/huggingface/transformers/pull/2778
- https://github.com/huggingface/transformers/issues/3788
Roberta now behaves like GPT2, i.e. without a prefix token. It can still be used in the masked-fill pipeline, since we can control the behavior of the mask token with the parameters of `AddedToken`, in particular keeping a space after the mask token as required to use the pipeline correctly. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5103/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5103/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5103",
"html_url": "https://github.com/huggingface/transformers/pull/5103",
"diff_url": "https://github.com/huggingface/transformers/pull/5103.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5103.patch",
"merged_at": 1592912218000
} |
https://api.github.com/repos/huggingface/transformers/issues/5102 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5102/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5102/comments | https://api.github.com/repos/huggingface/transformers/issues/5102/events | https://github.com/huggingface/transformers/issues/5102 | 641,070,807 | MDU6SXNzdWU2NDEwNzA4MDc= | 5,102 | Loading Fine Tuned BERT Sequence Model after Training | {
"login": "tsivaguru",
"id": 43160782,
"node_id": "MDQ6VXNlcjQzMTYwNzgy",
"avatar_url": "https://avatars.githubusercontent.com/u/43160782?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tsivaguru",
"html_url": "https://github.com/tsivaguru",
"followers_url": "https://api.github.com/users/tsivaguru/followers",
"following_url": "https://api.github.com/users/tsivaguru/following{/other_user}",
"gists_url": "https://api.github.com/users/tsivaguru/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tsivaguru/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tsivaguru/subscriptions",
"organizations_url": "https://api.github.com/users/tsivaguru/orgs",
"repos_url": "https://api.github.com/users/tsivaguru/repos",
"events_url": "https://api.github.com/users/tsivaguru/events{/privacy}",
"received_events_url": "https://api.github.com/users/tsivaguru/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,592 | 1,598 | 1,598 | NONE | null | # 🐛 Bug
Loading Fine Tuned BERT Sequence Model after Training
## Information
Hi All,
I am facing the following issue while loading a pretrained BERT sequence model fine-tuned on my own data:
```
RuntimeError: Error(s) in loading state_dict for DataParallel:
Missing key(s) in state_dict: "module.out.weight", "module.out.bias".
Unexpected key(s) in state_dict: "bert.embeddings.word_embeddings.weight", "bert.embeddings.position_embeddings.weight", "bert.embeddings.token_type_embeddings.weight", "bert.embeddings.LayerNorm.weight", "bert.embeddings.LayerNorm.bias", "bert.encoder.layer.0.attention.self.query.weight", "bert.encoder.layer.0.attention.self.query.bias", "bert.encoder.layer.0.attention.self.key.weight", "bert.encoder.layer.0.attention.self.key.bias", "bert.encoder.layer.0.attention.self.value.weight", "bert.encoder.layer.0.attention.self.value.bias", "bert.encoder.layer.0.attention.output.dense.weight", "bert.encoder.layer.0.attention.output.dense.bias", "bert.encoder.layer.0.attention.output.LayerNorm.weight", "bert.encoder.layer.0.attention.output.LayerNorm.bias", "bert.encoder.layer.0.intermediate.dense.weight", "bert.encoder.layer.0.intermediate.dense.bias", "bert.encoder.layer.0.output.dense.weight", "bert.encoder.layer.0.output.dense.bias", "bert.encoder.layer.0.output.LayerNorm.weight", "bert.encoder.layer.0.output.LayerNorm.bias", "bert.encoder.layer.1.attention.self.query.weight", "bert.encoder.layer.1.attention.self.query.bias", "bert.encoder.layer.1.attention.self.key.weight", "bert.encoder.layer.1.attention.self.key.bias", "bert.encoder.layer.1.attention.self.value.weight", "bert.encoder.layer.1.attention.self.value.bias", "bert.encoder.layer.1.attention.output.dense.weight", "bert.encoder.layer.1.attention.output.dense.bias", "bert.encoder.layer.1.attention.output.LayerNorm....
```
Any idea about this error?
Model I am using (Bert, XLNet ...):
BERT SequenceClassification
Language I am using the model on (English, Chinese ...):
English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
I am not able to load the fine-tuned BERT model from the saved model directory:
```python
MODEL = BERTBaseUncased()
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
MODEL = nn.DataParallel(MODEL)
MODEL.load_state_dict(torch.load(MODEL_PATH, map_location=device))
MODEL.to(DEVICE)
MODEL.eval()
```
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
my own task with my dataset
## To reproduce
Steps to reproduce the behavior:
1. Save the fine-tuned BERT sequence classification model
2. Create MODEL_PATH and the config file path from the saved model location
3. Run the code below to serve the fine-tuned model through a REST API and test it on the test data
```python
import config
import torch
import flask
import time
from flask import Flask
from flask import request
# import functools
import torch.nn as nn
import joblib
from transformers import BertTokenizer, BertConfig, BertModel

app = Flask(__name__)

MODEL = None
MODEL_PATH = r'/content/model_save/pytorch_model.bin'
config_file = r'/content/model_save/config.json'
DEVICE = "cuda"
PREDICTION_DICT = dict()
memory = joblib.Memory("../input/", verbose=0)


class BERTBaseUncased(nn.Module):
    def __init__(self):
        super(BERTBaseUncased, self).__init__()
        # self.bert = transformers.BertModel.from_pretrained(config.BERT_PATH)
        # config = BertConfig.from_pretrained("bert-base-uncased", output_hidden_states=True)
        config = BertConfig(config_file)
        # self.bert = BertModel(config)
        self.bert = BertTokenizer.from_pretrained("bert-base-uncased", config=config)
        # self.bert = BertTokenizer.from_pretrained(r"/content/model_save/pytorch_model.bin", do_lower_case=False, encoding="utf-8")
        self.bert_drop = nn.Dropout(0.3)
        self.out = nn.Linear(768, 1)

    def forward(self, ids, mask, token_type_ids):
        _, o2 = self.bert(ids, attention_mask=mask, token_type_ids=token_type_ids)
        bo = self.bert_drop(o2)
        output = self.out(bo)
        return output


def predict_from_cache(sentence):
    if sentence in PREDICTION_DICT:
        return PREDICTION_DICT[sentence]
    else:
        result = sentence_prediction(sentence)
        PREDICTION_DICT[sentence] = result
        return result


@memory.cache
def sentence_prediction(sentence):
    tokenizer = config.TOKENIZER
    max_len = config.MAX_LEN
    review = str(sentence)
    review = " ".join(review.split())

    inputs = tokenizer.encode_plus(
        review, None, add_special_tokens=True, max_length=max_len
    )

    ids = inputs["input_ids"]
    mask = inputs["attention_mask"]
    token_type_ids = inputs["token_type_ids"]

    padding_length = max_len - len(ids)
    ids = ids + ([0] * padding_length)
    mask = mask + ([0] * padding_length)
    token_type_ids = token_type_ids + ([0] * padding_length)

    ids = torch.tensor(ids, dtype=torch.long).unsqueeze(0)
    mask = torch.tensor(mask, dtype=torch.long).unsqueeze(0)
    token_type_ids = torch.tensor(token_type_ids, dtype=torch.long).unsqueeze(0)

    ids = ids.to(DEVICE, dtype=torch.long)
    token_type_ids = token_type_ids.to(DEVICE, dtype=torch.long)
    mask = mask.to(DEVICE, dtype=torch.long)

    outputs = MODEL(ids=ids, mask=mask, token_type_ids=token_type_ids)
    outputs = torch.sigmoid(outputs).cpu().detach().numpy()
    return outputs[0][0]


@app.route("/predict")
def predict():
    sentence = request.args.get("sentence")
    start_time = time.time()
    article_prediction = sentence_prediction(sentence)
    response = {}
    response["response"] = {
        "sentence": str(article_prediction),
        "time_taken": str(time.time() - start_time),
    }
    return flask.jsonify(response)


if __name__ == "__main__":
    MODEL = BERTBaseUncased()
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    MODEL = nn.DataParallel(MODEL)
    MODEL.load_state_dict(torch.load(MODEL_PATH, map_location=device))
    MODEL.to(DEVICE)
    MODEL.eval()
    app.run()
```
## Expected behavior
## Environment info
- `transformers` version:
- Platform: GPU
- Python version: 3.6
- PyTorch version (GPU?): 1.5.0+cu101
- Tensorflow version (GPU?): 2.x
- Using GPU in script?:Yes
- Using distributed or parallel set-up in script?: Don't know
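Reading the error above, all the unexpected keys are `bert.*` (a plain transformers checkpoint saved with `save_pretrained`), while the expected keys are `module.out.*` (the custom head wrapped in `nn.DataParallel`). A minimal sketch of one plausible fix, offered as an assumption rather than a confirmed resolution, is to reload the checkpoint with the matching transformers class instead of the custom wrapper:
```python
from transformers import BertForSequenceClassification

# assumption: /content/model_save/ was written with model.save_pretrained(...),
# i.e. it contains pytorch_model.bin plus config.json for a BertForSequenceClassification
MODEL = BertForSequenceClassification.from_pretrained("/content/model_save/")
MODEL.to(DEVICE)
MODEL.eval()
```
If the custom `BERTBaseUncased` head is really what was trained, the checkpoint would instead need a matching architecture and handling of the `module.` prefix.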
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5102/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5102/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5101 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5101/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5101/comments | https://api.github.com/repos/huggingface/transformers/issues/5101/events | https://github.com/huggingface/transformers/issues/5101 | 641,060,842 | MDU6SXNzdWU2NDEwNjA4NDI= | 5,101 | OpenAIGPTDoubleHeadsModel Don't have the "labels" attributes as it is described in the documentation | {
"login": "ghsama",
"id": 20842504,
"node_id": "MDQ6VXNlcjIwODQyNTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/20842504?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghsama",
"html_url": "https://github.com/ghsama",
"followers_url": "https://api.github.com/users/ghsama/followers",
"following_url": "https://api.github.com/users/ghsama/following{/other_user}",
"gists_url": "https://api.github.com/users/ghsama/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghsama/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghsama/subscriptions",
"organizations_url": "https://api.github.com/users/ghsama/orgs",
"repos_url": "https://api.github.com/users/ghsama/repos",
"events_url": "https://api.github.com/users/ghsama/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghsama/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @ghsama `lm_labels` is changed to `labels` after 2.11.0. So if you are using version <=2.11.0 use `lm_labels` and if you're using master branch then use `labels`",
"Yes ! i just figure it out . Thank you @patil-suraj :D "
] | 1,592 | 1,592 | 1,592 | NONE | null | # 🐛 Bug
## Information
Model I am using: OpenAIGPTDoubleHeadsModel
Language I am using the model on: English
The problem arises when using:
* The official example scripts:
I'm using transformers version 2.11.0.
In the documentation I found that I can pass a "labels" argument to the OpenAIGPTDoubleHeadsModel to be used in the loss:
[Documentation](https://huggingface.co/transformers/model_doc/gpt.html#openaigptdoubleheadsmodel)
But when I pass it, it isn't recognized:
```python
model(input_ids=input_ids, token_type_ids=token_type_ids, position_ids=None, head_mask=None,
      mc_token_ids=mc_token_ids, labels=lm_labels, mc_labels=mc_labels)
```
```
TypeError                                 Traceback (most recent call last)
<ipython-input-67-cc8f7864d8a7> in <module>()
----> 1 model(input_ids=input_ids, token_type_ids=token_type_ids, position_ids=None, head_mask=None, mc_token_ids = mc_token_ids, labels=lm_labels, mc_labels = mc_labels)
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    548             result = self._slow_forward(*input, **kwargs)
    549         else:
--> 550             result = self.forward(*input, **kwargs)
    551         for hook in self._forward_hooks.values():
    552             hook_result = hook(self, input, result)
TypeError: forward() got an unexpected keyword argument 'labels'
```
## Question
Is passing the labels through the GPT implementation deprecated? If yes, is there a way to pass them?
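For reference, a minimal version-tolerant sketch (my own workaround, based on the rename noted in the comments above; the tensors are the same ones as in the call above):
```python
# assumption, per the comments above: `lm_labels` was renamed to `labels` after v2.11.0
common = dict(input_ids=input_ids, token_type_ids=token_type_ids,
              mc_token_ids=mc_token_ids, mc_labels=mc_labels)
try:
    outputs = model(labels=lm_labels, **common)      # master / after 2.11.0
except TypeError:
    outputs = model(lm_labels=lm_labels, **common)   # releases up to 2.11.0
```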
Thank you | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5101/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5101/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5100 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5100/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5100/comments | https://api.github.com/repos/huggingface/transformers/issues/5100/events | https://github.com/huggingface/transformers/issues/5100 | 641,017,173 | MDU6SXNzdWU2NDEwMTcxNzM= | 5,100 | Update Conda Release | {
"login": "lukasfolle",
"id": 26490449,
"node_id": "MDQ6VXNlcjI2NDkwNDQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/26490449?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lukasfolle",
"html_url": "https://github.com/lukasfolle",
"followers_url": "https://api.github.com/users/lukasfolle/followers",
"following_url": "https://api.github.com/users/lukasfolle/following{/other_user}",
"gists_url": "https://api.github.com/users/lukasfolle/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lukasfolle/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lukasfolle/subscriptions",
"organizations_url": "https://api.github.com/users/lukasfolle/orgs",
"repos_url": "https://api.github.com/users/lukasfolle/repos",
"events_url": "https://api.github.com/users/lukasfolle/events{/privacy}",
"received_events_url": "https://api.github.com/users/lukasfolle/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This might provide more information: https://github.com/conda-forge/transformers-feedstock/issues/3\r\n\r\nIn short, the conda-forge update is blocked by `sentencepiece` not being available through conda.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"The feedstock has now been updated for Linux and is available. Other OSes are still waiting on sentencepiece.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This should be fixed by #8073 ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,592 | 1,610 | 1,610 | NONE | null | When trying to get started with transformers using some examples from the model cards, the installation with conda results in an outdated version (2.1.1).
As a result, the example cannot run. However, when using pip to install transformers the newest version (2.11) is correctly used.
I would suggest updating the conda release in order to avoid installing via pip. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5100/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5100/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5099 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5099/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5099/comments | https://api.github.com/repos/huggingface/transformers/issues/5099/events | https://github.com/huggingface/transformers/pull/5099 | 640,913,885 | MDExOlB1bGxSZXF1ZXN0NDM2MjMzNTYy | 5,099 | Fix TF WarmUp class | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5099?src=pr&el=h1) Report\n> Merging [#5099](https://codecov.io/gh/huggingface/transformers/pull/5099?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/efeb75b8054cc299698cf8bc09f395ada2660745&el=desc) will **increase** coverage by `0.04%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5099?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5099 +/- ##\n==========================================\n+ Coverage 77.24% 77.29% +0.04% \n==========================================\n Files 133 133 \n Lines 22134 22134 \n==========================================\n+ Hits 17097 17108 +11 \n+ Misses 5037 5026 -11 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5099?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/optimization\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5099/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `57.27% <ø> (ø)` | |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5099/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.06% <0.00%> (+0.12%)` | :arrow_up: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5099/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <0.00%> (+0.23%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5099/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `75.40% <0.00%> (+0.40%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5099/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.26% <0.00%> (+1.24%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5099?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5099?src=pr&el=footer). Last update [efeb75b...d99033f](https://codecov.io/gh/huggingface/transformers/pull/5099?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks @Colanim! Good catch!! Unfortunately it is a duplicate of #4940 :smile: and should be merged soon :)",
"Haha didn't see x)"
] | 1,592 | 1,592 | 1,592 | CONTRIBUTOR | null | This commit fixes the TF WarmUp learning rate scheduler.
The LR curve was wrong because the decay schedule did not account for the warmup steps. See the linked issue for more details.
Fixes #5098 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5099/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5099/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5099",
"html_url": "https://github.com/huggingface/transformers/pull/5099",
"diff_url": "https://github.com/huggingface/transformers/pull/5099.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5099.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/5098 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5098/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5098/comments | https://api.github.com/repos/huggingface/transformers/issues/5098/events | https://github.com/huggingface/transformers/issues/5098 | 640,910,892 | MDU6SXNzdWU2NDA5MTA4OTI= | 5,098 | 🐛 [TF] `create_optimizer` wrong superposition of learning rate schedules | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,592 | 1,593 | 1,593 | CONTRIBUTOR | null | # 🐛 Bug
When using `create_optimizer`, two learning rate schedules are stacked on top of each other (`WarmUp` and the Keras `PolynomialDecay`):
https://github.com/huggingface/transformers/blob/efeb75b8054cc299698cf8bc09f395ada2660745/src/transformers/optimization_tf.py#L70-L80
But the step passed on by the `WarmUp` scheduler is not offset by the warmup steps, which leads to a wrong learning rate:

---
Expected learning rate shape :

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5098/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5098/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5097 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5097/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5097/comments | https://api.github.com/repos/huggingface/transformers/issues/5097/events | https://github.com/huggingface/transformers/issues/5097 | 640,908,566 | MDU6SXNzdWU2NDA5MDg1NjY= | 5,097 | Training the BERTSUM model | {
"login": "tahmedge",
"id": 15964236,
"node_id": "MDQ6VXNlcjE1OTY0MjM2",
"avatar_url": "https://avatars.githubusercontent.com/u/15964236?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tahmedge",
"html_url": "https://github.com/tahmedge",
"followers_url": "https://api.github.com/users/tahmedge/followers",
"following_url": "https://api.github.com/users/tahmedge/following{/other_user}",
"gists_url": "https://api.github.com/users/tahmedge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tahmedge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tahmedge/subscriptions",
"organizations_url": "https://api.github.com/users/tahmedge/orgs",
"repos_url": "https://api.github.com/users/tahmedge/repos",
"events_url": "https://api.github.com/users/tahmedge/events{/privacy}",
"received_events_url": "https://api.github.com/users/tahmedge/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, as seen with @sshleifer, you can train for summarization using the [summarization script](https://github.com/huggingface/transformers/tree/master/examples/summarization). The models supported right now are all BART variants and t5-small. More to come!"
] | 1,592 | 1,593 | 1,593 | NONE | null | Hi, I find that the script can only predict summaries using the BERTSUM model. Is it possible to train the BERTSUM model using this script? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5097/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5097/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5096 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5096/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5096/comments | https://api.github.com/repos/huggingface/transformers/issues/5096/events | https://github.com/huggingface/transformers/issues/5096 | 640,903,149 | MDU6SXNzdWU2NDA5MDMxNDk= | 5,096 | Can I train a BART model from scratch with transformers? | {
"login": "ScottishFold007",
"id": 36957508,
"node_id": "MDQ6VXNlcjM2OTU3NTA4",
"avatar_url": "https://avatars.githubusercontent.com/u/36957508?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ScottishFold007",
"html_url": "https://github.com/ScottishFold007",
"followers_url": "https://api.github.com/users/ScottishFold007/followers",
"following_url": "https://api.github.com/users/ScottishFold007/following{/other_user}",
"gists_url": "https://api.github.com/users/ScottishFold007/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ScottishFold007/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ScottishFold007/subscriptions",
"organizations_url": "https://api.github.com/users/ScottishFold007/orgs",
"repos_url": "https://api.github.com/users/ScottishFold007/repos",
"events_url": "https://api.github.com/users/ScottishFold007/events{/privacy}",
"received_events_url": "https://api.github.com/users/ScottishFold007/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes",
"> Yes\r\n\r\nThat' s awesome!Can you give a code to show? I'm grateful!",
"So from the paper: https://arxiv.org/pdf/1910.13461.pdf, you can see that Bart is trained on denoising input sequences in almost any possible way. \r\n\r\nOne way could be for `BartForConditionalGeneration`:\r\n\r\n```python \r\nfrom transformers import BartTokenizer, BartForConditionalGeneration, BartConfig\r\n\r\ntok = BartTokenizer.from_pretrained(\"facebook/bart-large\")\r\nmodel = BartForConditionalGeneration(BartConfig())\r\n\r\ninput_string = \"My dog is <mask> </s>\"\r\ndecoder_input_string = \"<s> My dog is cute\"\r\nlabels_string = \"My dog is cute </s>\"\r\n\r\ninput_ids = tok(input_string, add_special_tokens=False, return_tensors=\"pt\").input_ids\r\ndecoder_input_ids =tok(decoder_input_string, add_special_tokens=False, return_tensors=\"pt\").input_ids\r\nlabels = tok(labels_string, add_special_tokens=False, return_tensors=\"pt\").input_ids\r\n \r\nloss = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids, labels=labels)[0]\r\n```",
"Pinging @sshleifer to make sure I did not forget anything",
"> Pinging @sshleifer to make sure I did not forget anything\r\n\r\nActually, I was going to ask. how train a model from zero to one. For example, I want to train a Chinese bart model.",
"Here's a working example for this, including batching:\r\n\r\n```\r\nfrom transformers import BartTokenizer, BartForConditionalGeneration, BartConfig\r\n\r\ntok = BartTokenizer.from_pretrained(\"facebook/bart-large\")\r\nmodel = BartForConditionalGeneration(BartConfig())\r\n\r\ninput_batch = [\"My dog is <mask></s>\", \"It loves to play in the <mask></s>\"]\r\ndecoder_input_batch = [\"<s>My dog is cute\", \"<s>It loves to play in the park\"]\r\nlabels_batch = [\"My dog is cute</s>\", \"It loves to play in the park</s>\"]\r\n\r\ninput_ids = tok.batch_encode_plus(input_batch, add_special_tokens=False, return_tensors=\"pt\", padding=True).input_ids\r\ndecoder_input_ids = tok.batch_encode_plus(decoder_input_batch, add_special_tokens=False, return_tensors=\"pt\", padding=True).input_ids\r\nlabels = tok.batch_encode_plus(labels_batch, add_special_tokens=False, return_tensors=\"pt\", padding=True).input_ids\r\n\r\nloss = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids, labels=labels)[0]\r\n```\r\n\r\n`>>>` `tensor(10.9981, device='cuda:0', grad_fn=<NllLossBackward>)`",
"> Here's a working example for this, including batching:\r\n> \r\n> ```\r\n> from transformers import BartTokenizer, BartForConditionalGeneration, BartConfig\r\n> \r\n> tok = BartTokenizer.from_pretrained(\"facebook/bart-large\")\r\n> model = BartForConditionalGeneration(BartConfig())\r\n> \r\n> input_batch = [\"My dog is <mask></s>\", \"It loves to play in the <mask></s>\"]\r\n> decoder_input_batch = [\"<s>My dog is cute\", \"<s>It loves to play in the park\"]\r\n> labels_batch = [\"My dog is cute</s>\", \"It loves to play in the park</s>\"]\r\n> \r\n> input_ids = tok.batch_encode_plus(input_batch, add_special_tokens=False, return_tensors=\"pt\", padding=True).input_ids\r\n> decoder_input_ids = tok.batch_encode_plus(decoder_input_batch, add_special_tokens=False, return_tensors=\"pt\", padding=True).input_ids\r\n> labels = tok.batch_encode_plus(labels_batch, add_special_tokens=False, return_tensors=\"pt\", padding=True).input_ids\r\n> \r\n> loss = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids, labels=labels)[0]\r\n> ```\r\n> \r\n> `>>>` `tensor(10.9981, device='cuda:0', grad_fn=<NllLossBackward>)`\r\n\r\ninput_batch = [\"My dog is <mask></s>\", \"It loves to play in the <mask></s>\"]\r\ndecoder_input_batch = [\"<s>My dog is cute\", \"<s>It loves to play in the park\"]\r\nlabels_batch = [\"My dog is cute</s>\", \"It loves to play in the park</s>\"]\r\n\r\n\r\nIf I have a text document, each line of a paragraph, how do I rewrite the data input on it? Thanks!",
"@tomhosking the paper indicates that it uses both sentence permutation (loss is propagated from all tokens instead of only masked tokens) and infilling (include only one mask token for multiple consecutive masks). would this be a correct input?\r\n\r\ninput_batch = [\"\\<s>It is \\<mask\\> retriever. My dog is \\<mask\\>\\</s>\", \"\\<s>There \\<mask\\> in SF. It loves to play in the \\<mask\\>\\</s>\"]\r\ndecoder_input_batch = [\"\\</s>\\<s>My dog is cute. It is a golden retriever\", \"\\</s>\\<s>It loves to play in the park. There are many parks in SF.\"]\r\nlabels_batch = [\"\\<s>My dog is cute. It is a golden retriever\\</s>\", \"\\<s>It loves to play in the park. There are many parks in SF.\\</s>\"]\r\n\r\n(Note: decoder_input_batch starts with \\</s>\\<s> due to shift_tokens_right #7961)",
"Sorry for the intrusion, but I think your values are almost correct @swethmandava, except for the masking absence\r\n\r\n```python\r\ninput_batch = [\"<s>It <mask> retriever. My <mask> cute </s>\", ... ]\r\ndecoder_input_batch = [\"</s><s>My dog is cute. It is a golden retriever\", ...]\r\nlabels_batch = [\"<s>My dog is cute. It is a golden retriever</s>\", ...]\r\n```\r\n\r\nBTW: This `</s>` token at the beginning of decode's input is kind of weird to me, but it's inherited from the fairseq original code. If you wanna train the model from scratch with random weights I think you can go without this... or maybe this trick is important for convergence, we never know :grin:",
"Will only 15% mask in the encoder input cause some kind of leakage? The language model in the decoder cannot learn correctly",
"If anyone wants to train their MBART model then feel free to use this. \r\nhttps://github.com/prajdabre/yanmtt\r\n\r\nContributions are welcome!",
"> Sorry for the intrusion, but I think your values are almost correct @swethmandava, except for the masking absence\r\n> \r\n> ```python\r\n> input_batch = [\"<s>It <mask> retriever. My <mask> cute </s>\", ... ]\r\n> decoder_input_batch = [\"</s><s>My dog is cute. It is a golden retriever\", ...]\r\n> labels_batch = [\"<s>My dog is cute. It is a golden retriever</s>\", ...]\r\n> ```\r\n> \r\n> BTW: This `</s>` token at the beginning of decode's input is kind of weird to me, but it's inherited from the fairseq original code. If you wanna train the model from scratch with random weights I think you can go without this... or maybe this trick is important for convergence, we never know 😁\r\n\r\nI have a non-natural language dataset where I haven't actually been including `<s>` and `</s>` since they don't add any value (and need to be removed later anyway). To work with that, should I insert a pad token at the start of the `decoder_input` representation (and truncate to max_length)?",
"> So from the paper: https://arxiv.org/pdf/1910.13461.pdf, you can see that Bart is trained on denoising input sequences in almost any possible way.\r\n> \r\n> One way could be for `BartForConditionalGeneration`:\r\n> \r\n> ```python\r\n> from transformers import BartTokenizer, BartForConditionalGeneration, BartConfig\r\n> \r\n> tok = BartTokenizer.from_pretrained(\"facebook/bart-large\")\r\n> model = BartForConditionalGeneration(BartConfig())\r\n> \r\n> input_string = \"My dog is <mask> </s>\"\r\n> decoder_input_string = \"<s> My dog is cute\"\r\n> labels_string = \"My dog is cute </s>\"\r\n> \r\n> input_ids = tok(input_string, add_special_tokens=False, return_tensors=\"pt\").input_ids\r\n> decoder_input_ids =tok(decoder_input_string, add_special_tokens=False, return_tensors=\"pt\").input_ids\r\n> labels = tok(labels_string, add_special_tokens=False, return_tensors=\"pt\").input_ids\r\n> \r\n> loss = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids, labels=labels)[0]\r\n> ```\r\n\r\nHi, do you have a script to build the training dataset of BART pertain, thanks",
"@patrickvonplaten @sshleifer Did anyone ever come around to creating a notebook/script for BART pretraining? (In a linked issue you mentioned it was on the to-do list.)\r\n\r\nThe core difficulty is having a canonical implementation for the data preprocessing (BART is more than just token masking, I believe: e.g.,span masking, shuffling). But a full pretrain pipeline here or in fairseq is also sorely missing. ",
"Sadly not :-/ We now have on for Flax in #18297 - could you try to copy-paste the preprocessing logic into a PyTorch one maybe? ",
"@patrickvonplaten I've been porting the fairseq implementation to a PyTorch dataloader format. I found that the Flax implementation in HF lacks adding noise for 0-length spans and has some slightly diverging implementation so it was more straightforward to start from the fairseq implementation. I am now especially testing the data processing to get it as close as possible to fairseq's implementation (although it is my believe that [there's a bug in their code](https://github.com/facebookresearch/fairseq/issues/4695)).\r\n\r\nI would like to add a full pytorch example for DLM training of BART in the coming days/weeks but I could use some code reviews in doing that to feel more comfortable. Would that be possible?",
"Sure, happy to take a look! ",
"Hi\r\n\r\nI remember posting this a year ago but I've written an entire toolkit for this purpose. Feel free to use it. https://github.com/prajdabre/yanmtt\r\n\r\nI've also created a simple notebook for the same (scroll to the pretraining part): https://colab.research.google.com/drive/1ovlA_h0ggblawqR-yCgRs3uRjxFJ8K0l?usp=sharing\r\n\r\n",
"Hi Raj, thank you for this. I had come across it but your script seems to have a lot of additional things going on so that it is hard to extract the basics. I also found that you implement word/span masking but not the other things like adding noise or randomly swap a masked token for a random token, so not _completely_ like the original implementation (but correct me if I'm wrong!) .\r\n\r\nI think your library can be very useful to be used as a separate library, thanks! In addition I'll try add a PR in `transformers` for an succinct example to use within transformers with the `Trainer`, with data processing close the `fairseq` implementation.",
"Hi,\r\n\r\nMy focus was more on mbart and mt5 which looked only at span masking and reordering. I'm not sure if token replacement will have that big of an impact but can be easily implemented in 1 line. To my understanding, span masking is responsible for majority of the gains. The notebook contains a more watered down version of the masking method in my toolkit. You could consider that version and build on top of it easily.",
"Hey guys, I would want to know how to pre-training BART model from scratch. Anyone who know about this? BART, pegasus or other text summarization models are okay for me."
] | 1,592 | 1,676 | 1,592 | CONTRIBUTOR | null | Can I train a BART model from scratch with transformers? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5096/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5096/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5095 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5095/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5095/comments | https://api.github.com/repos/huggingface/transformers/issues/5095/events | https://github.com/huggingface/transformers/issues/5095 | 640,768,610 | MDU6SXNzdWU2NDA3Njg2MTA= | 5,095 | Addition of VisualBERT | {
"login": "gchhablani",
"id": 29076344,
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gchhablani",
"html_url": "https://github.com/gchhablani",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"This is very interesting!",
"This has been proposed before as a separate issue but no action was taken. Hence, I thought I'll start implementing some of the multi-modal models one by one.",
"Please let @liunian-harold-li and me know if you need any help. We can also provide the pre-trained models. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,592 | 1,614 | 1,599 | CONTRIBUTOR | null | # 🌟 New model addition
## Model description
The VisualBERT model is used for multi-modal processing when both images and text are present. It takes object detection features from images and combines them with textual embeddings from a pre-trained BERT model; the whole model is then pre-trained on COCO image-captioning data with an MLM task similar to BERT's. It has been shown to work well on several multi-modal tasks such as VQA, VCR, and NLVR.
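A rough, hypothetical sketch of that input construction (all shapes and module names are assumptions, not taken from the released code):

```python
import torch

text_emb = torch.randn(1, 16, 768)        # [batch, text_len, hidden] from BERT's embeddings
region_feats = torch.randn(1, 36, 2048)   # e.g. 36 detector region features per image
visual_proj = torch.nn.Linear(2048, 768)  # project visual features to BERT's hidden size

visual_emb = visual_proj(region_feats)
inputs_embeds = torch.cat([text_emb, visual_emb], dim=1)  # one joint sequence for the encoder
```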
## Open source status
The source code presented along with the paper can be found at
https://github.com/uclanlp/visualbert
* [x] the model implementation is available: (give details)
The model implementation can be found on the GitHub repository, in the models section: https://github.com/uclanlp/visualbert/tree/master/models
This code was provided along with the paper.
Another implementation, which is slightly harder to understand because of its complex dependencies, is available in Facebook Research's MMF framework: https://github.com/facebookresearch/mmf/blob/master/mmf/models/visual_bert.py
* [x] the model weights are available: (give details)
The model checkpoints that the authors used are presented as drive links in the given repository, depending on which pre-training we want. There are several links on the README file of the GitHub repository.
* [x] who are the authors: (mention them, if possible by @gh-username)
- Kai-Wei Chang: @KaiWeiChang
- Liunian Harold Li: @liunian-harold-li
- Mark Yatskar
- Da Yin
- Cho-Jui Hsieh
I want to contribute the model myself. Please let me know if this is the right avenue for this and how I can contribute.
"url": "https://api.github.com/repos/huggingface/transformers/issues/5095/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5095/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5094 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5094/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5094/comments | https://api.github.com/repos/huggingface/transformers/issues/5094/events | https://github.com/huggingface/transformers/issues/5094 | 640,740,906 | MDU6SXNzdWU2NDA3NDA5MDY= | 5,094 | Different output from model on CPU and GPU | {
"login": "jxmorris12",
"id": 13238952,
"node_id": "MDQ6VXNlcjEzMjM4OTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jxmorris12",
"html_url": "https://github.com/jxmorris12",
"followers_url": "https://api.github.com/users/jxmorris12/followers",
"following_url": "https://api.github.com/users/jxmorris12/following{/other_user}",
"gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions",
"organizations_url": "https://api.github.com/users/jxmorris12/orgs",
"repos_url": "https://api.github.com/users/jxmorris12/repos",
"events_url": "https://api.github.com/users/jxmorris12/events{/privacy}",
"received_events_url": "https://api.github.com/users/jxmorris12/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! I don't think it's possible to have two different hardware perform *exactly* the same. A 1e-7 precision is already very high. Out of curiosity, why do you need such a high precision?\r\n\r\nNevertheless, this is more of a pytorch-specific question rather than transformers-related. I'll close the issue here but feel free to link to an issue on the PyTorch forums if you do open one there!",
"@LysandreJik I haven't seen this issue with models not trained with transformers (though I probably just haven't looked hard enough).\r\n\r\nCan you give me more info on why this is the case, and maybe point me to some relevant resources?\r\n\r\nAs to your second question-- we need a high precision because we're searching for adversarial examples that maximize model misprediction in our library [TextAttack](https://github.com/QData/TextAttack). This precision error caused a different search outcome with a CPU and GPU. So, for some pair of sequences $a$ and $b$, the model on the CPU predicted $a$ 'more correctly' than it predicted $b$, and the model on the GPU predicted $b$ more correctly than $a$. Our automated tests caught this issue. \r\n\r\nDo you have a suggestion on how to fix it? Should we just truncate model scores to 7 decimal places? That feels like a crude fix."
] | 1,592 | 1,593 | 1,593 | NONE | null | # 🐛 Bug
I trained models using the `run_glue.py` script and uploaded them to the model hub. I've realized that their output slightly differs between when I do inference on the CPU and the GPU.
Absolute errors are small – on the order of `1e-7` – but that turns out to be too much for my use case.
## Information
Model I am using (Bert, XLNet ...): BERT
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: MNLI
* [ ] my own task or dataset: (give details below)
## To reproduce
```python
(torch) qcuda8 04:47 PM > python
Python 3.7.7 (default, Mar 26 2020, 15:48:22)
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import transformers
>>> model_cpu = transformers.AutoModelForSequenceClassification.from_pretrained('textattack/bert-base-uncased-MNLI')
>>> model_gpu = transformers.AutoModelForSequenceClassification.from_pretrained('textattack/bert-base-uncased-MNLI').to('cuda')
>>>
>>> premise = "Among these are the red brick Royal Palace, which now houses the Patan Museum (Nepal's finest and most modern museum), and, facing the palace across the narrow brick plaza, eight temples of different styles and sizes."
>>> hypothesis = "The Patan Museum is down the street from the red brick Royal Palace."
>>>
>>> tokenizer = transformers.AutoTokenizer.from_pretrained('textattack/bert-base-uncased-MNLI')
>>> encoded_text = tokenizer.encode_plus((premise, hypothesis), return_tensors='pt')
>>> encoded_text['input_ids']
tensor([[101, 100, 100, 102]])
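>>> # note: passing the (premise, hypothesis) tuple as a single argument makes the
>>> # tokenizer treat each full string as one out-of-vocabulary token, hence
>>> # [CLS] [UNK] [UNK] [SEP]; encode_plus(premise, hypothesis, ...) encodes the pair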
>>> encoded_text_cuda = {k: v.cuda() for k,v in encoded_text.items()}
>>> model_gpu(**encoded_text_cuda)[0].squeeze().tolist()
[-1.0867613554000854, 0.6688923239707947, 0.30274006724357605]
>>> model_cpu(**encoded_text)[0].squeeze().tolist()
[-1.0867608785629272, 0.6688917279243469, 0.3027404248714447]
```
## Expected behavior
I want the model outputs on the CPU and the GPU to be identical.
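For reference, a tolerance-based comparison (logits copied from the session above) passes even though bitwise equality fails:

```python
import torch

cpu_logits = torch.tensor([-1.0867608785629272, 0.6688917279243469, 0.3027404248714447])
gpu_logits = torch.tensor([-1.0867613554000854, 0.6688923239707947, 0.30274006724357605])

print(torch.equal(cpu_logits, gpu_logits))                # False: not bitwise identical
print(torch.allclose(cpu_logits, gpu_logits, atol=1e-5))  # True: within float32 tolerance
```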
## Environment info
- `transformers` version: 2.11.0
- Platform: Linux-3.10.0-693.el7.x86_64-x86_64-with-centos-7.4.1708-Core
- Python version: 3.7.7
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): 2.2.0 (True)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5094/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5094/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5093 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5093/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5093/comments | https://api.github.com/repos/huggingface/transformers/issues/5093/events | https://github.com/huggingface/transformers/pull/5093 | 640,724,143 | MDExOlB1bGxSZXF1ZXN0NDM2MDgxMTk1 | 5,093 | [style] add pandas to setup.cfg | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5093?src=pr&el=h1) Report\n> Merging [#5093](https://codecov.io/gh/huggingface/transformers/pull/5093?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/90c833870c78bb3d5d807a9a3e6a40d24bf2302b&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5093?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5093 +/- ##\n=======================================\n Coverage 77.28% 77.28% \n=======================================\n Files 133 133 \n Lines 22134 22134 \n=======================================\n Hits 17107 17107 \n Misses 5027 5027 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5093?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5093/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.26% <0.00%> (-0.16%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5093/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `75.40% <0.00%> (+0.40%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5093?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5093?src=pr&el=footer). Last update [90c8338...eae4841](https://codecov.io/gh/huggingface/transformers/pull/5093?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,592 | 1,592 | 1,592 | CONTRIBUTOR | null | Need to add pandas to setup.cfg. Otherwise, for people who have pandas installed locally, isort tries to change `eli5_utils.py` every time.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5093/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5093/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5093",
"html_url": "https://github.com/huggingface/transformers/pull/5093",
"diff_url": "https://github.com/huggingface/transformers/pull/5093.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5093.patch",
"merged_at": 1592426358000
} |
https://api.github.com/repos/huggingface/transformers/issues/5092 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5092/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5092/comments | https://api.github.com/repos/huggingface/transformers/issues/5092/events | https://github.com/huggingface/transformers/pull/5092 | 640,717,563 | MDExOlB1bGxSZXF1ZXN0NDM2MDc1Njc5 | 5,092 | [MarianTokenizer] Switch to sacremoses for punc normalization | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2039044877,
"node_id": "MDU6TGFiZWwyMDM5MDQ0ODc3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/marian",
"name": "marian",
"color": "30cc95",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5092?src=pr&el=h1) Report\n> Merging [#5092](https://codecov.io/gh/huggingface/transformers/pull/5092?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/20fa82898495f516b221115fc3ef9ec8ebf50b1e&el=desc) will **increase** coverage by `0.07%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5092?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5092 +/- ##\n==========================================\n+ Coverage 77.21% 77.29% +0.07% \n==========================================\n Files 133 133 \n Lines 22134 22134 \n==========================================\n+ Hits 17091 17108 +17 \n+ Misses 5043 5026 -17 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5092?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/5092/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `93.75% <100.00%> (+0.89%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5092/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.42% <0.00%> (+0.15%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5092/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `75.40% <0.00%> (+0.40%)` | :arrow_up: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5092/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `78.65% <0.00%> (+2.29%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5092?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5092?src=pr&el=footer). Last update [20fa828...e30eaf5](https://codecov.io/gh/huggingface/transformers/pull/5092?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,592 | 1,592 | 1,592 | CONTRIBUTOR | null | Attempt at fixing #4491 using @jpcorb20's solution.
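A minimal sketch of the sacremoses punctuation normalization being switched to (the exact wiring into `MarianTokenizer` is assumed here):

```python
from sacremoses import MosesPunctNormalizer

mpn = MosesPunctNormalizer(lang="en")
# unifies quote, dash, and ellipsis variants into canonical forms
print(mpn.normalize("«Hello» … “world”!"))
```
 | {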
"url": "https://api.github.com/repos/huggingface/transformers/issues/5092/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5092/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5092",
"html_url": "https://github.com/huggingface/transformers/pull/5092",
"diff_url": "https://github.com/huggingface/transformers/pull/5092.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5092.patch",
"merged_at": 1592425866000
} |
https://api.github.com/repos/huggingface/transformers/issues/5091 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5091/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5091/comments | https://api.github.com/repos/huggingface/transformers/issues/5091/events | https://github.com/huggingface/transformers/issues/5091 | 640,712,494 | MDU6SXNzdWU2NDA3MTI0OTQ= | 5,091 | encode_plus wrongly tokenizing a symbol | {
"login": "IbtihalFerwana",
"id": 41843953,
"node_id": "MDQ6VXNlcjQxODQzOTUz",
"avatar_url": "https://avatars.githubusercontent.com/u/41843953?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/IbtihalFerwana",
"html_url": "https://github.com/IbtihalFerwana",
"followers_url": "https://api.github.com/users/IbtihalFerwana/followers",
"following_url": "https://api.github.com/users/IbtihalFerwana/following{/other_user}",
"gists_url": "https://api.github.com/users/IbtihalFerwana/gists{/gist_id}",
"starred_url": "https://api.github.com/users/IbtihalFerwana/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IbtihalFerwana/subscriptions",
"organizations_url": "https://api.github.com/users/IbtihalFerwana/orgs",
"repos_url": "https://api.github.com/users/IbtihalFerwana/repos",
"events_url": "https://api.github.com/users/IbtihalFerwana/events{/privacy}",
"received_events_url": "https://api.github.com/users/IbtihalFerwana/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"What tokenizer are you using? Is it one already trained? If so, which? If not, on what did you train your tokenizer? What is your code? What is your output?\r\n\r\nIf possible, please fill the template. It's much easier to help you if you provide all the information we need.",
"1. Trained using a pretrained biobert for NER task:\r\n`tokenizer = BertTokenizer.from_pretrained(\"monologg/biobert_v1.0_pubmed_pmc\")`\r\n\r\n2. encode_plus method\r\n`encoded_dict = tokenizer.encode_plus(\r\n sent_str, # Sentence to encode.\r\n add_special_tokens = True, \r\n max_length = 75, \r\n pad_to_max_length = True,\r\n return_attention_mask = True, \r\n return_tensors = 'pt', \r\n )`\r\n3. Dataset of \r\n> linnaeus-IOB: \r\nincludes in one line `>= label O`\r\n\r\n3. Current output\r\nthe line of `>=` is split into two symbols without hashes\r\nI get \r\n`> label O`\r\n`= label O`",
"Same for the bio dataset BC4CHEMD-IOBES, it has (R), and the tokenizer split them into three tokens without hashes",
"Right, I fail to understand why you think this is wrongly tokenized? This tokenizer does not have the token `>=` in its vocabulary.\r\n\r\nYou can check with:\r\n\r\n```py\r\n\">\" in tokenizer.get_vocab() # Returns True\r\n\">=\" in tokenizer.get_vocab() # returns False\r\n```",
"@LysandreJik yes sir, thank you\r\n\r\nBut I'm trying to understand why `encode_plus` did not add hashes `##` before `=` while tokenization\r\n\r\nI was expecting to see \r\n`>`\r\n`##=`\r\nso they would relate to the same token\r\nbut that was not the case\r\n\r\nIs my question clear?\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,592 | 1,600 | 1,600 | NONE | null | When I used the bio dataset (linnaeus-IOB) to train BERT, I found that it includes the symbol `>=` (greater than or equal), and the tokenizer separates it into two tokens.
**Expected output is either**:
`>`
`##=`
**or**
`>= `
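A minimal reproduction sketch (tokenizer name taken from this thread): BERT's basic tokenizer splits on punctuation *before* WordPiece runs, so `##` continuations can only appear inside a single word, never across a punctuation split.

```python
from transformers import BertTokenizer

tok = BertTokenizer.from_pretrained("monologg/biobert_v1.0_pubmed_pmc")
# observed in this thread: ['>', '='] with no '##', because '>' and '='
# are split apart as punctuation before WordPiece ever sees them
print(tok.tokenize(">="))
```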
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5091/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5091/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5090 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5090/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5090/comments | https://api.github.com/repos/huggingface/transformers/issues/5090/events | https://github.com/huggingface/transformers/pull/5090 | 640,708,393 | MDExOlB1bGxSZXF1ZXN0NDM2MDY4MTQw | 5,090 | minor spelling correction in script execution command - movement pruning | {
"login": "pranavpawar3",
"id": 39311422,
"node_id": "MDQ6VXNlcjM5MzExNDIy",
"avatar_url": "https://avatars.githubusercontent.com/u/39311422?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pranavpawar3",
"html_url": "https://github.com/pranavpawar3",
"followers_url": "https://api.github.com/users/pranavpawar3/followers",
"following_url": "https://api.github.com/users/pranavpawar3/following{/other_user}",
"gists_url": "https://api.github.com/users/pranavpawar3/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pranavpawar3/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pranavpawar3/subscriptions",
"organizations_url": "https://api.github.com/users/pranavpawar3/orgs",
"repos_url": "https://api.github.com/users/pranavpawar3/repos",
"events_url": "https://api.github.com/users/pranavpawar3/events{/privacy}",
"received_events_url": "https://api.github.com/users/pranavpawar3/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,592 | 1,592 | 1,592 | CONTRIBUTOR | null | The actual script name is `counts_parameters.py`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5090/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5090/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5090",
"html_url": "https://github.com/huggingface/transformers/pull/5090",
"diff_url": "https://github.com/huggingface/transformers/pull/5090.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5090.patch",
"merged_at": 1592424523000
} |
https://api.github.com/repos/huggingface/transformers/issues/5089 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5089/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5089/comments | https://api.github.com/repos/huggingface/transformers/issues/5089/events | https://github.com/huggingface/transformers/issues/5089 | 640,703,022 | MDU6SXNzdWU2NDA3MDMwMjI= | 5,089 | Is there a helper script to preprocess data for T5 for masked language modeling? | {
"login": "abhisheksgumadi",
"id": 1021734,
"node_id": "MDQ6VXNlcjEwMjE3MzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1021734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhisheksgumadi",
"html_url": "https://github.com/abhisheksgumadi",
"followers_url": "https://api.github.com/users/abhisheksgumadi/followers",
"following_url": "https://api.github.com/users/abhisheksgumadi/following{/other_user}",
"gists_url": "https://api.github.com/users/abhisheksgumadi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhisheksgumadi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhisheksgumadi/subscriptions",
"organizations_url": "https://api.github.com/users/abhisheksgumadi/orgs",
"repos_url": "https://api.github.com/users/abhisheksgumadi/repos",
"events_url": "https://api.github.com/users/abhisheksgumadi/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhisheksgumadi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Not yet sadly - it's on my ToDo list. Hope to be able to work on it soon",
"I am working on a script for T5 based upon the current run_language_modeling.py, maybe I can share that once I am done and someone can confirm if it works as expected?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hi, I'm working in the same task. [Here](https://github.com/huggingface/transformers/issues/7451) you can see my code if it helps!"
] | 1,592 | 1,601 | 1,598 | NONE | null | Hi Team
Thanks for the wonderful HuggingFace library!
I am now working with T5 on my own dataset. I want to know if there is a helper script that can automatically take text, mask a random set of tokens, and generate the expected output sequence for the unsupervised pre-training (masked language modeling) task.
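There is no official helper yet (see the comments above), but a hypothetical minimal sketch of T5-style span corruption, with the masked span chosen by hand instead of at random, could look like this:

```python
tokens = "the quick brown fox jumps over the lazy dog".split()
i, j = 1, 3  # pretend this span was sampled at random

inputs = tokens[:i] + ["<extra_id_0>"] + tokens[j:]          # corrupted input
targets = ["<extra_id_0>"] + tokens[i:j] + ["<extra_id_1>"]  # expected output

print(" ".join(inputs))   # the <extra_id_0> fox jumps over the lazy dog
print(" ".join(targets))  # <extra_id_0> quick brown <extra_id_1>
```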
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5089/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5089/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5088 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5088/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5088/comments | https://api.github.com/repos/huggingface/transformers/issues/5088/events | https://github.com/huggingface/transformers/issues/5088 | 640,677,523 | MDU6SXNzdWU2NDA2Nzc1MjM= | 5,088 | Image GPT | {
"login": "minimaxir",
"id": 2179708,
"node_id": "MDQ6VXNlcjIxNzk3MDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2179708?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/minimaxir",
"html_url": "https://github.com/minimaxir",
"followers_url": "https://api.github.com/users/minimaxir/followers",
"following_url": "https://api.github.com/users/minimaxir/following{/other_user}",
"gists_url": "https://api.github.com/users/minimaxir/gists{/gist_id}",
"starred_url": "https://api.github.com/users/minimaxir/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/minimaxir/subscriptions",
"organizations_url": "https://api.github.com/users/minimaxir/orgs",
"repos_url": "https://api.github.com/users/minimaxir/repos",
"events_url": "https://api.github.com/users/minimaxir/events{/privacy}",
"received_events_url": "https://api.github.com/users/minimaxir/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"I'd like a google colab of it ",
"Hey @minimaxir! Here's a [colab](https://colab.research.google.com/github/apeguero1/image-gpt/blob/master/Transformers_Image_GPT.ipynb) which loads the weights into a subclass of `GPT2LMHeadModel` and demonstrates unconditional image generation and conditional image completion. \r\n\r\nSome differences I've found between Image-GPT and GPT2 which are reflected in the subclass. \r\n\r\n1) Image-GPT layer normalization doesn't subtract off the mean\r\n2) different activations used in the MLP\r\n3) In Image-GPT, the input and output embeddings are not tied\r\n4) Image-GPT has an extra learned \"sos\" token embedding which is concatenated at the beginning of the sequence\r\n5) The GPT2 `[n_embd, 3*n_embd]` dimensional linear layer, `c_attn`, which produces queries, keys, and values is instead split into 3 separate linear layers each with dimension `[n_head, n_embd/n_head, n_embd]` in Image-GPT (this only affects how to load the weights and not the actual model).\r\n6) In Image-GPT, the `conv1d` module doesn't have a bias term\r\n\r\nSo what's our next step to add this to the repo?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@apeguero1 we have an \"Adding a new model\" checklist at https://github.com/huggingface/transformers/tree/master/templates/adding_a_new_model",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,592 | 1,604 | 1,604 | NONE | null | # 🌟 New model addition
## Model description
OpenAI just announced Image GPT: https://openai.com/blog/image-gpt/
Although image rendering would be out of scope for Transformers, the RGB generation would still be in scope, and it would be best to port the weights to a `GPT2LMHeadModel`.
However, it's not immediately clear how the tokenization is implemented in the downloaded model (there is no separate `vocab.json`).
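One plausible reading of the blog post is that the "tokenization" simply maps each pixel to its nearest entry in a small learned color palette (reportedly 512 k-means centroids), so there is no text-style vocabulary file. A hypothetical sketch with stand-in values:

```python
import numpy as np

palette = np.random.rand(512, 3)   # stand-in for the learned color centroids
image = np.random.rand(32, 32, 3)  # one 32x32 RGB image scaled to [0, 1]

flat = image.reshape(-1, 3)        # raster order: a sequence of 1024 pixels
dists = ((flat[:, None, :] - palette[None, :, :]) ** 2).sum(-1)
tokens = dists.argmin(axis=1)      # one discrete token per pixel, vocab size 512
```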
## Open source status
* [ ] the model implementation is available: https://github.com/openai/image-gpt
* [ ] the model weights are available: see README above
* [ ] who are the authors: @openai
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5088/reactions",
"total_count": 34,
"+1": 21,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 6,
"rocket": 0,
"eyes": 7
} | https://api.github.com/repos/huggingface/transformers/issues/5088/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5087 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5087/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5087/comments | https://api.github.com/repos/huggingface/transformers/issues/5087/events | https://github.com/huggingface/transformers/issues/5087 | 640,651,743 | MDU6SXNzdWU2NDA2NTE3NDM= | 5,087 | Why does the T5Tokenizer prepend a '_' to every token? | {
"login": "abhisheknovoic",
"id": 62595485,
"node_id": "MDQ6VXNlcjYyNTk1NDg1",
"avatar_url": "https://avatars.githubusercontent.com/u/62595485?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhisheknovoic",
"html_url": "https://github.com/abhisheknovoic",
"followers_url": "https://api.github.com/users/abhisheknovoic/followers",
"following_url": "https://api.github.com/users/abhisheknovoic/following{/other_user}",
"gists_url": "https://api.github.com/users/abhisheknovoic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhisheknovoic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhisheknovoic/subscriptions",
"organizations_url": "https://api.github.com/users/abhisheknovoic/orgs",
"repos_url": "https://api.github.com/users/abhisheknovoic/repos",
"events_url": "https://api.github.com/users/abhisheknovoic/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhisheknovoic/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, the T5 tokenizer is a SentencePiece tokenizer, and that's the way SentencePiece works. This underscore means that it's the start of a word. When it's not the start of a word, it's not prepended by anything.\r\n\r\nI think you're thinking of the # symbol because you're used to the BertTokenizer (wordpiece). SentencePiece works a bit differently!"
] | 1,592 | 1,592 | 1,592 | NONE | null | I am using the T5Tokenizer as follows:
```
In [118]: from transformers import T5Tokenizer
In [119]: tokenizer = T5Tokenizer.from_pretrained('t5-small')
In [120]: input_ids = tokenizer.encode_plus('I love my dog', return_tensors='pt')
In [121]: [tokenizer.convert_ids_to_tokens([ele]) for ele in input_ids['input_ids'][0]]
Out[121]: [['▁I'], ['▁love'], ['▁my'], ['▁dog']]
```
I even tried another one as follows:
```
In [129]: [tokenizer.convert_ids_to_tokens([ele]) for ele in input_ids['input_ids'][0]]
Out[129]: [['▁I'], ['▁love'], ['▁my'], ['▁school'], ['▁National'], ['x'], ['y'], ['z']]
```
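For comparison, here is a small self-contained check of the same tokenizer's behavior on a word-internal split (a sketch with `t5-small`; the example string is taken from the output above):
```python
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained('t5-small')

# '▁' marks the start of a word; continuation pieces carry no marker at all
print(tokenizer.tokenize('Nationalxyz'))
# e.g. ['▁National', 'x', 'y', 'z'] -- no '##' prefixes, unlike WordPiece

# decoding simply turns each '▁' back into a space
print(tokenizer.convert_tokens_to_string(['▁I', '▁love', '▁my', '▁dog']))
# 'I love my dog'
```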
In the examples above, shouldn't 'x', 'y', and 'z' have '#' prepended to them, as they are part of the same word?
Why do I see an underscore before every token? Is this related to how the input is fed into T5? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5087/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5087/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5086 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5086/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5086/comments | https://api.github.com/repos/huggingface/transformers/issues/5086/events | https://github.com/huggingface/transformers/pull/5086 | 640,638,197 | MDExOlB1bGxSZXF1ZXN0NDM2MDEwMzkw | 5,086 | SummarizationPipeline: init required task name | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5086?src=pr&el=h1) Report\n> Merging [#5086](https://codecov.io/gh/huggingface/transformers/pull/5086?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/70bc3ead4f0b08e8cadd1805ada2a22f0c302399&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5086?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5086 +/- ##\n=======================================\n Coverage 77.26% 77.27% \n=======================================\n Files 133 133 \n Lines 22146 22149 +3 \n=======================================\n+ Hits 17111 17115 +4 \n+ Misses 5035 5034 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5086?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/5086/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `76.41% <100.00%> (+0.13%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5086/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.57% <0.00%> (+0.15%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5086?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5086?src=pr&el=footer). Last update [70bc3ea...ca3ad69](https://codecov.io/gh/huggingface/transformers/pull/5086?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"I think all Pipelines have this problem, not just the `summarization` pipeline, no? \r\n\r\nThe only time the `task` string is ever really used is in the `class Pipeline(_ScikitCompat):` to call the correct task specific config parameters. \r\n\r\nI would actually propose to change the `task` string logic here a bit so that we give every pipeline class a static string variable `task` and don't pass it in the init. For summarization, e.g.:\r\n\r\n```python\r\nclass SummarizationPipeline(Pipeline):\r\n\r\n task = \"summarization\"\r\n\r\n def __call__(...):\r\n ...\r\n```\r\n\r\nThen in the `Pipeline` class we could have the following logic:\r\n\r\n```python\r\nclass Pipeline:\r\n\r\n task = None\r\n\r\n def __init__(...):\r\n ...\r\n assert self.task in SUPPORTED_TASKS.values(), f\"{self.task} does not exist\"\r\n```\r\n\r\nIMO, \"task\" is not really part of the instantiated object, but more of the class itself. Also, I think a model should always only have one default configuration per task. I think T5 is already an exception in that it can handle multiple task and I don't really see why T5 for example would need two different default configs for summarization. \r\nWe can always overwrite the config parameters when calling the model, so I don't think we restrict ourselves too much with this design and having multiple default configs in the config file would quickly make them unreadable.\r\n\r\nWhat do you think @julien-c ?",
"I'm not following everything here @patrickvonplaten :)\r\n\r\nMerging this like that for now but feel free to refine in the future"
] | 1,592 | 1,592 | 1,592 | MEMBER | null | Otherwise, can't do:
```python
from transformers import AutoModelWithLMHead, AutoTokenizer, SummarizationPipeline

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
model = AutoModelWithLMHead.from_pretrained("facebook/bart-large-cnn")
p = SummarizationPipeline(model=model, tokenizer=tokenizer)
p("Long boring text to summarize, etc. etc.")
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5086/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5086/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5086",
"html_url": "https://github.com/huggingface/transformers/pull/5086",
"diff_url": "https://github.com/huggingface/transformers/pull/5086.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5086.patch",
"merged_at": 1592637391000
} |
https://api.github.com/repos/huggingface/transformers/issues/5085 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5085/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5085/comments | https://api.github.com/repos/huggingface/transformers/issues/5085/events | https://github.com/huggingface/transformers/pull/5085 | 640,543,849 | MDExOlB1bGxSZXF1ZXN0NDM1OTM1MjM4 | 5,085 | Add missing arg in 02-transformers notebook | {
"login": "pri-ax",
"id": 54854789,
"node_id": "MDQ6VXNlcjU0ODU0Nzg5",
"avatar_url": "https://avatars.githubusercontent.com/u/54854789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pri-ax",
"html_url": "https://github.com/pri-ax",
"followers_url": "https://api.github.com/users/pri-ax/followers",
"following_url": "https://api.github.com/users/pri-ax/following{/other_user}",
"gists_url": "https://api.github.com/users/pri-ax/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pri-ax/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pri-ax/subscriptions",
"organizations_url": "https://api.github.com/users/pri-ax/orgs",
"repos_url": "https://api.github.com/users/pri-ax/repos",
"events_url": "https://api.github.com/users/pri-ax/events{/privacy}",
"received_events_url": "https://api.github.com/users/pri-ax/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5085?src=pr&el=h1) Report\n> Merging [#5085](https://codecov.io/gh/huggingface/transformers/pull/5085?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7291ea0bff57a017e71b1ea8ec01ff19da298bf0&el=desc) will **decrease** coverage by `0.01%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5085?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5085 +/- ##\n==========================================\n- Coverage 77.24% 77.22% -0.02% \n==========================================\n Files 133 133 \n Lines 22146 22146 \n==========================================\n- Hits 17107 17103 -4 \n- Misses 5039 5043 +4 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5085?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5085/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.86% <0.00%> (-0.78%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5085/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.61% <0.00%> (-0.12%)` | :arrow_down: |\n| [src/transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5085/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `78.81% <0.00%> (+0.19%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5085/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `75.40% <0.00%> (+0.40%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5085?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5085?src=pr&el=footer). Last update [7291ea0...0092c55](https://codecov.io/gh/huggingface/transformers/pull/5085?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Hi, I agree with the typo change, but there's no need for the `from_tf` flag. `bert-base-cased` is available in both TensorFlow and PyTorch, so there's no need for the flag.",
"@LysandreJik you're right! I was getting an error unless I explicitly passed `from_tf=True` yesterday, but not today. Since I can't reproduce it, I'll remove that change. "
] | 1,592 | 1,592 | 1,592 | CONTRIBUTOR | null | Add missing `from_tf=True` arg when creating the model via `AutoModel.from_pretrained()`, to avoid an OSError when loading a PyTorch model from a TF 2.0 checkpoint.
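For context, the flag is passed straight to `from_pretrained` and tells it to load and convert a TF 2.0 checkpoint (a minimal sketch; `bert-base-cased` here just mirrors the notebook and ships both weight formats):
```python
from transformers import AutoModel

# default: load the PyTorch weights of the checkpoint
model = AutoModel.from_pretrained("bert-base-cased")

# explicit: convert the TensorFlow 2.0 weights instead (requires TF installed);
# only needed when a checkpoint ships TF weights but no PyTorch ones
model = AutoModel.from_pretrained("bert-base-cased", from_tf=True)
```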
Also fixed two small typos in markdown text. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5085/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5085/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5085",
"html_url": "https://github.com/huggingface/transformers/pull/5085",
"diff_url": "https://github.com/huggingface/transformers/pull/5085.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5085.patch",
"merged_at": 1592521444000
} |
https://api.github.com/repos/huggingface/transformers/issues/5084 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5084/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5084/comments | https://api.github.com/repos/huggingface/transformers/issues/5084/events | https://github.com/huggingface/transformers/pull/5084 | 640,510,103 | MDExOlB1bGxSZXF1ZXN0NDM1OTA3NDg0 | 5,084 | Update installation page and add contributing to the doc | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5084?src=pr&el=h1) Report\n> Merging [#5084](https://codecov.io/gh/huggingface/transformers/pull/5084?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7291ea0bff57a017e71b1ea8ec01ff19da298bf0&el=desc) will **increase** coverage by `0.02%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5084?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5084 +/- ##\n==========================================\n+ Coverage 77.24% 77.26% +0.02% \n==========================================\n Files 133 133 \n Lines 22146 22146 \n==========================================\n+ Hits 17107 17112 +5 \n+ Misses 5039 5034 -5 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5084?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5084/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.61% <0.00%> (-0.12%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5084/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `75.40% <0.00%> (+0.40%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5084/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.42% <0.00%> (+0.77%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5084?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5084?src=pr&el=footer). Last update [7291ea0...e9e1ea6](https://codecov.io/gh/huggingface/transformers/pull/5084?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,592 | 1,592 | 1,592 | COLLABORATOR | null | This PR simplifies the installation page, adds a mention of installing TF/PT with a single command (CPU-only), and adds a test to check that the installation was successful.
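For illustration, such a check typically runs a pipeline end to end; a hedged sketch (the exact snippet added to the docs may differ):
```python
from transformers import pipeline

# downloads a small default model on first run, then prints a prediction
print(pipeline("sentiment-analysis")("I love this library"))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```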
The section on tests is moved to CONTRIBUTING. The mention of the tokenization process for OpenAI GPT is moved to that model's doc page.
It also adds the contributing guide to the documentation via a symlink (a few links had to be fixed to make it work). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5084/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5084/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5084",
"html_url": "https://github.com/huggingface/transformers/pull/5084",
"diff_url": "https://github.com/huggingface/transformers/pull/5084.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5084.patch",
"merged_at": 1592416871000
} |
https://api.github.com/repos/huggingface/transformers/issues/5083 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5083/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5083/comments | https://api.github.com/repos/huggingface/transformers/issues/5083/events | https://github.com/huggingface/transformers/pull/5083 | 640,473,621 | MDExOlB1bGxSZXF1ZXN0NDM1ODc3NDgy | 5,083 | updated hans eval instructions | {
"login": "prajjwal1",
"id": 24690051,
"node_id": "MDQ6VXNlcjI0NjkwMDUx",
"avatar_url": "https://avatars.githubusercontent.com/u/24690051?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/prajjwal1",
"html_url": "https://github.com/prajjwal1",
"followers_url": "https://api.github.com/users/prajjwal1/followers",
"following_url": "https://api.github.com/users/prajjwal1/following{/other_user}",
"gists_url": "https://api.github.com/users/prajjwal1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/prajjwal1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/prajjwal1/subscriptions",
"organizations_url": "https://api.github.com/users/prajjwal1/orgs",
"repos_url": "https://api.github.com/users/prajjwal1/repos",
"events_url": "https://api.github.com/users/prajjwal1/events{/privacy}",
"received_events_url": "https://api.github.com/users/prajjwal1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5083?src=pr&el=h1) Report\n> Merging [#5083](https://codecov.io/gh/huggingface/transformers/pull/5083?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7291ea0bff57a017e71b1ea8ec01ff19da298bf0&el=desc) will **decrease** coverage by `0.39%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5083?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5083 +/- ##\n==========================================\n- Coverage 77.24% 76.84% -0.40% \n==========================================\n Files 133 133 \n Lines 22146 22146 \n==========================================\n- Hits 17107 17019 -88 \n- Misses 5039 5127 +88 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5083?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/5083/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `56.09% <0.00%> (-19.76%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5083/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.71% <0.00%> (-0.94%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5083/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.61% <0.00%> (-0.12%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5083?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5083?src=pr&el=footer). Last update [7291ea0...b485684](https://codecov.io/gh/huggingface/transformers/pull/5083?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,592 | 1,592 | 1,592 | CONTRIBUTOR | null | I've updated the info regarding how HANS evaluation can be carried out.
I've also renamed `run_hans.py` to `test_hans.py` to restore the previous file convention and also indicate that HANS only supports evaluation. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5083/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5083/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5083",
"html_url": "https://github.com/huggingface/transformers/pull/5083",
"diff_url": "https://github.com/huggingface/transformers/pull/5083.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5083.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/5082 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5082/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5082/comments | https://api.github.com/repos/huggingface/transformers/issues/5082/events | https://github.com/huggingface/transformers/pull/5082 | 640,471,377 | MDExOlB1bGxSZXF1ZXN0NDM1ODc1NTk5 | 5,082 | Add header and fix command | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5082?src=pr&el=h1) Report\n> Merging [#5082](https://codecov.io/gh/huggingface/transformers/pull/5082?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7291ea0bff57a017e71b1ea8ec01ff19da298bf0&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5082?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5082 +/- ##\n=======================================\n Coverage 77.24% 77.25% \n=======================================\n Files 133 133 \n Lines 22146 22146 \n=======================================\n+ Hits 17107 17108 +1 \n+ Misses 5039 5038 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5082?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5082/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.61% <0.00%> (-0.12%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5082/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.95% <0.00%> (+0.31%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5082?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5082?src=pr&el=footer). Last update [7291ea0...cc54022](https://codecov.io/gh/huggingface/transformers/pull/5082?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"LGTM"
] | 1,592 | 1,592 | 1,592 | COLLABORATOR | null | This finishes the fix for #4742 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5082/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5082/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5082",
"html_url": "https://github.com/huggingface/transformers/pull/5082",
"diff_url": "https://github.com/huggingface/transformers/pull/5082.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5082.patch",
"merged_at": 1592408706000
} |
https://api.github.com/repos/huggingface/transformers/issues/5081 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5081/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5081/comments | https://api.github.com/repos/huggingface/transformers/issues/5081/events | https://github.com/huggingface/transformers/issues/5081 | 640,419,497 | MDU6SXNzdWU2NDA0MTk0OTc= | 5,081 | 01_how-to-train.ipynb broken | {
"login": "orestisfl",
"id": 5778622,
"node_id": "MDQ6VXNlcjU3Nzg2MjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5778622?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/orestisfl",
"html_url": "https://github.com/orestisfl",
"followers_url": "https://api.github.com/users/orestisfl/followers",
"following_url": "https://api.github.com/users/orestisfl/following{/other_user}",
"gists_url": "https://api.github.com/users/orestisfl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/orestisfl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/orestisfl/subscriptions",
"organizations_url": "https://api.github.com/users/orestisfl/orgs",
"repos_url": "https://api.github.com/users/orestisfl/repos",
"events_url": "https://api.github.com/users/orestisfl/events{/privacy}",
"received_events_url": "https://api.github.com/users/orestisfl/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@orestisfl thanks for raising this, I also was scratching my head as the same tokenizer.save(\"EsperBERTo\") worked for me a few days ago and not anymore. I could save the tokenizer using tokenizer.save(\"EsperBERTo/vocab.txt\"), but then I can't load it. If I try to load I get:\r\n```\r\nTypeError: sep_token not found in the vocabulary\r\n```\r\nI'm using BertWordPieceTokenizer (not ByteLevelBPETokenizer used in your example) \r\n\r\nIt worked with tokenizers version 0.7.0, I just checked - I got version 0.8.0rc1 currently installed.\r\nI'll downgrade to 0.7.0 for now. \r\n",
"Was this BC intended @n1t0?",
"Yes, `tokenizers` `0.8.0` introduces the full tokenizer serialization, whereas before it saved the \"model\" only (vocab.json + merges.txt for BPE). So the save method should be used like that: `.save(\"tokenizer.json\")` and it saves the entire tokenizer to a JSON file.\r\nWe need to update the Notebook to use this new serialization method, but in the meantime, the only thing needed to make it work exactly like before is to replace:\r\n```python\r\n!mkdir EsperBERTo\r\ntokenizer.save(\"EsperBERTo\")\r\n```\r\nby\r\n```python\r\n!mkdir EsperBERTo\r\ntokenizer.save_model(\"EsperBERTo\")\r\n```",
"mind updating it before we forget? Thanks!",
"Sure, updated it with the quick change I mentioned. Will do a better update later.",
"Hey there, thanks for the quick fix!\r\nThe notebook now crashes for me during training, however:\r\n```\r\n---------------------------------------------------------------------------\r\nAttributeError Traceback (most recent call last)\r\n<ipython-input-19-0c647bc3a8b8> in <module>()\r\n----> 1 get_ipython().run_cell_magic('time', '', 'trainer.train()')\r\n\r\n11 frames\r\n<decorator-gen-60> in time(self, line, cell, local_ns)\r\n\r\n<timed eval> in <module>()\r\n\r\n/usr/local/lib/python3.6/dist-packages/transformers/data/data_collator.py in <listcomp>(.0)\r\n 112 probability_matrix = torch.full(labels.shape, self.mlm_probability)\r\n 113 special_tokens_mask = [\r\n--> 114 self.tokenizer.get_special_tokens_mask(val, already_has_special_tokens=True) for val in labels.tolist()\r\n 115 ]\r\n 116 probability_matrix.masked_fill_(torch.tensor(special_tokens_mask, dtype=torch.bool), value=0.0)\r\n\r\nAttributeError: 'RobertaTokenizerFast' object has no attribute 'get_special_tokens_mask'\r\n```\r\n\r\nLet me know if I should make a separate issue ",
"This one is for me (this method was actually not working as intended under the hood for Fast-tokenizers...)",
"@thomwolf - just to confirm, I tried the change you made and it fixes a problem for me. Thanks!\r\n```\r\nAttributeError: 'BertTokenizerFast' object has no attribute 'get_special_tokens_mask'\r\n```"
] | 1,592 | 1,592 | 1,592 | NONE | null | # 🐛 Bug
## To reproduce
Steps to reproduce the behavior:
1. Go to https://github.com/huggingface/transformers/tree/master/examples
2. Click the colab for `language-modeling`: https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb
3. Run notebook
## Expected behavior
The notebook finishes successfully.
What I get is:
```
---------------------------------------------------------------------------
Exception Traceback (most recent call last)
<ipython-input-5-52625a7c86e5> in <module>()
1 get_ipython().system('mkdir EsperBERTo')
----> 2 tokenizer.save("EsperBERTo")
/usr/local/lib/python3.6/dist-packages/tokenizers/implementations/base_tokenizer.py in save(self, path, pretty)
330 A path to the destination Tokenizer file
331 """
--> 332 return self._tokenizer.save(path, pretty)
333
334 def to_str(self, pretty: bool = False):
Exception: Is a directory (os error 21)
```
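For anyone hitting this: as explained in the comments above, `tokenizers` 0.8 changed `save()` to take a file path and serialize the whole tokenizer to a single JSON file, while the old directory-style behavior moved to `save_model()`. A sketch of both calls (`tokenizer` being the `ByteLevelBPETokenizer` from the notebook):
```python
# tokenizers >= 0.8: `save` expects a file path and writes one JSON file
tokenizer.save("EsperBERTo/tokenizer.json")

# equivalent of the pre-0.8 behavior: writes vocab.json + merges.txt into the directory
tokenizer.save_model("EsperBERTo")
```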
## Environment info
- `transformers` version: 2.11.0
- Platform: Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.0+cu101 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: NA
- Using distributed or parallel set-up in script?: NA
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5081/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5081/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5080 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5080/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5080/comments | https://api.github.com/repos/huggingface/transformers/issues/5080/events | https://github.com/huggingface/transformers/pull/5080 | 640,367,093 | MDExOlB1bGxSZXF1ZXN0NDM1Nzg4ODIz | 5,080 | [docs] fix T5 training doc | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Great thanks @patil-suraj ",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5080?src=pr&el=h1) Report\n> Merging [#5080](https://codecov.io/gh/huggingface/transformers/pull/5080?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ebab096e864a619717a497089d864d10e21bc536&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5080?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5080 +/- ##\n=======================================\n Coverage 77.26% 77.27% \n=======================================\n Files 128 128 \n Lines 21854 21854 \n=======================================\n+ Hits 16886 16887 +1 \n+ Misses 4968 4967 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5080?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5080/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.38% <0.00%> (-0.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5080/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.61% <0.00%> (+0.11%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5080/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.42% <0.00%> (+0.15%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5080?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5080?src=pr&el=footer). Last update [ebab096...2f8a93c](https://codecov.io/gh/huggingface/transformers/pull/5080?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,592 | 1,592 | 1,592 | MEMBER | null | This PR fixes the T5 training doc. In a recent commit, `lm_labels` was renamed to `labels`, so the doc was changed accordingly. This addresses issue #5079.
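Concretely, the rename looks like this in the doc snippets (a runnable sketch with `t5-small`):
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

input_ids = tokenizer.encode("translate English to German: Hello", return_tensors="pt")
labels = tokenizer.encode("Hallo", return_tensors="pt")

# was: model(input_ids=input_ids, lm_labels=labels) on transformers <= 2.11
outputs = model(input_ids=input_ids, labels=labels)  # argument renamed on master
```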
@patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5080/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5080/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5080",
"html_url": "https://github.com/huggingface/transformers/pull/5080",
"diff_url": "https://github.com/huggingface/transformers/pull/5080.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5080.patch",
"merged_at": 1592464590000
} |
https://api.github.com/repos/huggingface/transformers/issues/5079 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5079/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5079/comments | https://api.github.com/repos/huggingface/transformers/issues/5079/events | https://github.com/huggingface/transformers/issues/5079 | 640,337,532 | MDU6SXNzdWU2NDAzMzc1MzI= | 5,079 | How do I pre-train the T5 model in HuggingFace library using my own text corpus? | {
"login": "abhisheknovoic",
"id": 62595485,
"node_id": "MDQ6VXNlcjYyNTk1NDg1",
"avatar_url": "https://avatars.githubusercontent.com/u/62595485?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhisheknovoic",
"html_url": "https://github.com/abhisheknovoic",
"followers_url": "https://api.github.com/users/abhisheknovoic/followers",
"following_url": "https://api.github.com/users/abhisheknovoic/following{/other_user}",
"gists_url": "https://api.github.com/users/abhisheknovoic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhisheknovoic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhisheknovoic/subscriptions",
"organizations_url": "https://api.github.com/users/abhisheknovoic/orgs",
"repos_url": "https://api.github.com/users/abhisheknovoic/repos",
"events_url": "https://api.github.com/users/abhisheknovoic/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhisheknovoic/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi, @abhisheknovoic this might help you https://huggingface.co/transformers/model_doc/t5.html#training\r\ncheck the Unsupervised denoising training section",
"@patil-suraj , do you mean this class? - T5ForConditionalGeneration \r\n\r\nAlso, at the top of the page, there is the following code:\r\n\r\n```input_ids = tokenizer.encode('The <extra_id_1> walks in <extra_id_2> park', return_tensors='pt')\r\nlm_labels = tokenizer.encode('<extra_id_1> cute dog <extra_id_2> the <extra_id_3> </s>', return_tensors='pt')\r\n# the forward function automatically creates the correct decoder_input_ids\r\nmodel(input_ids=input_ids, lm_labels=lm_labels)\r\n```\r\n\r\nAny idea which class is the model instantiated from? I could not find any class with lm_labels parameter.\r\n\r\nThanks",
"Yes, it's `T5ForConditionalGeneration`, and `lm_lables` is now changed to `labels`.\r\n\r\nPinging @patrickvonplaten for more details.",
"@patil-suraj , I tried the following code which throws an error. Any idea why? Thanks\r\n\r\n```In [32]: from transformers import T5Tokenizer, T5ForConditionalGeneration, T5Config\r\nIn [32]: from transformers import T5Tokenizer, T5ForConditionalGeneration, T5Config\r\n\r\nIn [33]: input_ids = tokenizer.encode('The <extra_id_1> walks in <extra_id_2> park', return_tensors='pt')\r\n\r\nIn [34]: labels = tokenizer.encode('<extra_id_1> cute dog <extra_id_2> the <extra_id_3> </s>', return_tensors='pt')\r\n\r\nIn [35]: config = T5Config()\r\n\r\nIn [36]: model = T5ForConditionalGeneration(config=config)\r\n\r\nIn [37]: model(input_ids=input_ids, lm_labels=labels)\r\n---------------------------------------------------------------------------\r\nAssertionError Traceback (most recent call last)\r\n<ipython-input-37-6717b0ecfbf5> in <module>\r\n----> 1 model(input_ids=input_ids, lm_labels=labels)\r\n\r\n/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)\r\n 530 result = self._slow_forward(*input, **kwargs)\r\n 531 else:\r\n--> 532 result = self.forward(*input, **kwargs)\r\n 533 for hook in self._forward_hooks.values():\r\n 534 hook_result = hook(self, input, result)\r\n\r\n/usr/local/lib/python3.7/site-packages/transformers/modeling_t5.py in forward(self, input_ids, attention_mask, encoder_outputs, decoder_input_ids, decoder_attention_mask, decoder_past_key_value_states, use_cache, lm_labels, inputs_embeds, decoder_inputs_embeds, head_mask)\r\n 1068 if lm_labels is not None and decoder_input_ids is None and decoder_inputs_embeds is None:\r\n 1069 # get decoder inputs from shifting lm labels to the right\r\n-> 1070 decoder_input_ids = self._shift_right(lm_labels)\r\n 1071\r\n 1072 # If decoding with past key value states, only the last tokens\r\n\r\n/usr/local/lib/python3.7/site-packages/transformers/modeling_t5.py in _shift_right(self, input_ids)\r\n 609 assert (\r\n 610 decoder_start_token_id is not None\r\n--> 611 ), \"self.model.config.decoder_start_token_id has to be defined. In T5 it is usually set to the pad_token_id. See T5 docs for more information\"\r\n 612\r\n 613 # shift inputs to the right\r\n\r\nAssertionError: self.model.config.decoder_start_token_id has to be defined. In T5 it is usually set to the pad_token_id. See T5 docs for more information\r\n```\r\n\r\nMy versions are \r\n```\r\ntransformers==2.11.0\r\ntokenizers==0.7.0\r\n```",
"If you are using 2.11.0 then use `lm_labels` and if you are using master then use `labels`",
"@patil-suraj , thanks. I have installed the master version. It still complains with the same error. It seems like I need to specify something for the decoder_start_token_id. ",
"Ok, I got it working. I initialized config like follows:\r\n\r\n```\r\nconfig = T5Config(decoder_start_token_id=tokenizer.convert_tokens_to_ids(['<pad>'])[0])\r\n```",
"@patil-suraj , however, if we use the master branch, it seems like the tokenizers are broken. The T5 tokenizer doesn't tokenize the sentinel tokens correctly.",
"> @patil-suraj , do you mean this class? - T5ForConditionalGeneration\r\n> \r\n> Also, at the top of the page, there is the following code:\r\n> \r\n> ```\r\n> lm_labels = tokenizer.encode('<extra_id_1> cute dog <extra_id_2> the <extra_id_3> </s>', return_tensors='pt')\r\n> # the forward function automatically creates the correct decoder_input_ids\r\n> model(input_ids=input_ids, lm_labels=lm_labels)\r\n> ```\r\n> \r\n> Any idea which class is the model instantiated from? I could not find any class with lm_labels parameter.\r\n> \r\n> Thanks\r\n\r\nFeel free to also open a PR to correct `lm_labels` to `labels` in the comment :-) ",
"Just saw that @patil-suraj already did this - awesome thanks :-) \r\n\r\n@abhisheknovoic regarding the T5 tokenizer, can you post some code here that shows that T5 tokenization is broken (would be great if we can easily reproduce the error)",
"@patrickvonplaten it would be nice if we also add seq-2-seq (t5, bart) model pre-training examples in official examples \r\n\r\ncc @sshleifer ",
"Definitely!",
"Not sure if this should be a separate issue or not, but I am having difficulty training my own T5 tokenizer. When training a BPE tokenizer using the amazing huggingface tokenizer library and attempting to load it via\r\n\r\n```python\r\ntokenizer = T5Tokenizer.from_pretrained('./tokenizer')\r\n```\r\nI get the following error:\r\n```\r\nOSError: Model name './tokenizer/' was not found in tokenizers model name list (t5-small, t5-base, t5-large, t5-3b, t5-11b). We assumed './tokenizer/' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.\r\n```\r\n\r\nI attempted to train a sentencepiece model instead using the, again amazing, huggingface tokenizer library, I get the same error because the `tokenizer.save` method does not actual generate the `spiece.model` file.\r\n\r\nAm I doing something wrong?\r\n\r\nTranformers version: 2.11.0\r\nTokenizers version: 0.7.0\r\n\r\nHere is a colab to reproduce the error: https://colab.research.google.com/drive/1WX1Q2Ze9k0SxFMLLv1aFgVGBFMEVTyDe?usp=sharing",
"@mfuntowicz @n1t0 - maybe you can help here",
"> Definitely!\r\n\r\nThe pre-training scripts would really help.original mesh transformer is very complicated to understand.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"We've released [nanoT5](https://github.com/PiotrNawrot/nanoT5) that reproduces T5-model (similar to BART) pre-training in PyTorch (not Flax). \r\n\r\nYou can take a look! \r\n\r\nAny suggestions are more than welcome."
] | 1,592 | 1,678 | 1,599 | NONE | null | Hello,
I understand how the T5 architecture works, and I have my own large corpus where I mask spans of tokens and replace them with sentinel tokens.
I am also familiar with the tokenizers in HuggingFace, especially the T5 tokenizer.
Can someone point me to a document, or refer me to the class, that I need to use to pre-train the T5 model on my corpus using the masked language model approach?
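For concreteness, the unsupervised denoising setup pointed to in the comments above looks roughly like this (sketch; uses `labels` as on current master, and the sentinel tokens `<extra_id_0>`, `<extra_id_1>`, ...):
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# masked spans in the input are replaced by sentinel tokens...
input_ids = tokenizer.encode("The <extra_id_0> walks in <extra_id_1> park", return_tensors="pt")
# ...and the target spells out the dropped spans behind the same sentinels
labels = tokenizer.encode("<extra_id_0> cute dog <extra_id_1> the <extra_id_2>", return_tensors="pt")

loss = model(input_ids=input_ids, labels=labels)[0]  # decoder inputs are built automatically
loss.backward()
```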
Thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5079/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5079/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5078 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5078/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5078/comments | https://api.github.com/repos/huggingface/transformers/issues/5078/events | https://github.com/huggingface/transformers/pull/5078 | 640,315,320 | MDExOlB1bGxSZXF1ZXN0NDM1NzQ1MTQy | 5,078 | Add BERT Loses Patience (Patience-based Early Exit) | {
"login": "JetRunner",
"id": 22514219,
"node_id": "MDQ6VXNlcjIyNTE0MjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JetRunner",
"html_url": "https://github.com/JetRunner",
"followers_url": "https://api.github.com/users/JetRunner/followers",
"following_url": "https://api.github.com/users/JetRunner/following{/other_user}",
"gists_url": "https://api.github.com/users/JetRunner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JetRunner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JetRunner/subscriptions",
"organizations_url": "https://api.github.com/users/JetRunner/orgs",
"repos_url": "https://api.github.com/users/JetRunner/repos",
"events_url": "https://api.github.com/users/JetRunner/events{/privacy}",
"received_events_url": "https://api.github.com/users/JetRunner/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834052574,
"node_id": "MDU6TGFiZWwxODM0MDUyNTc0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20Sequence%20Classification",
"name": "Ex: Sequence Classification",
"color": "46FFCF",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5078?src=pr&el=h1) Report\n> Merging [#5078](https://codecov.io/gh/huggingface/transformers/pull/5078?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e4aaa4580515446cd5a2972ab42fec0b95819c84&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5078?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5078 +/- ##\n=======================================\n Coverage 77.26% 77.26% \n=======================================\n Files 133 133 \n Lines 22146 22146 \n=======================================\n+ Hits 17110 17111 +1 \n+ Misses 5036 5035 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5078?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5078/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `75.40% <0.00%> (+0.40%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5078?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5078?src=pr&el=footer). Last update [e4aaa45...8e0cf02](https://codecov.io/gh/huggingface/transformers/pull/5078?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Hi @LysandreJik , my worry is that PABEE is not a standard inference method. It cannot deal with batch inference. Also, it can only support classification & regression (no tagging, no summarization, etc.). \r\n\r\nFrom another aspect, as a researcher, when I try to do a little tweak with ALBERT, I won't be happy if someone adds some new stuff into the model and it'll add unnecessary burdens to the researchers. They'll definitely hate it. So IMO, it's better to implement them separately. Also, as @sshleifer suggested, I refactored the model to inherit `AlbertTransformer` and `AlbertModel`, etc.",
"> Hi @LysandreJik , my worry is that PABEE is not a standard inference method. It cannot deal with batch inference. Also, it can only support classification & regression (no tagging, no summarization, etc.).\r\n> \r\n> From another aspect, as a researcher, when I try to do a little tweak with ALBERT, I won't be happy if someone adds some new stuff into the model and it'll add unnecessary burdens to the researchers. They'll definitely hate it. So IMO, it's better to implement them separately. Also, as @sshleifer suggested, I refactored the model to inherit `AlbertTransformer` and `AlbertModel`, etc.\r\n\r\nSince I refactored the code with inheritance, I figure it is okay to use `adaptive_forward` since I won't have to overwrite the standard `forward` (which would be confusing since it's hard to tell which part is modified for the users). Also, it's better to preserve the standard `forward` so we can easily compare `adaptive_forward` to the standard `forward`.\r\n\r\n@sshleifer I copy most stuff from the original ALBERT & BERT modeling code so I think it also does not make sense if I refactor the parts I copied.\r\n\r\nRe. trainer, maybe we can refactor the code later? I think it's quite optional here but it requires a lot of changes on `run_glue_with_pabee.py`. Some part of me wonders if it is worthy.",
"The code you copied has gotten cleaned up since you copied it, hence the suggestions.",
"> LGTM pending suggestions, test.\n\nOkay, I’ll add a test"
] | 1,592 | 1,592 | 1,592 | CONTRIBUTOR | null | Add BERT Loses Patience (Patience-based Early Exit) based on the paper https://arxiv.org/abs/2006.04152 and the official implementation https://github.com/JetRunner/PABEE
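For readers skimming the thread, the inference mechanism is small: an internal classifier after each layer emits a prediction, and the model exits once the prediction has stayed the same across `patience` consecutive classifiers. A toy sketch of that loop (not the PR's actual code):
```python
import torch

def patience_early_exit(per_layer_logits, patience=3):
    """per_layer_logits: one (num_labels,) logits tensor per transformer layer."""
    prev_pred, streak = None, 0
    for exit_layer, logits in enumerate(per_layer_logits, start=1):
        pred = int(torch.argmax(logits))
        streak = streak + 1 if pred == prev_pred else 1
        prev_pred = pred
        if streak >= patience:  # prediction stable long enough -> exit early
            return pred, exit_layer
    return prev_pred, len(per_layer_logits)  # no early exit: use the last layer
```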
It's impossible to make PABEE's ALBERT and BERT compatible with the standard API (e.g., `run_glue.py`), so I keep the modeling files in a separate directory, under `example/bert_loses_patience`, instead of putting them alongside `modeling_bert.py`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5078/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5078/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5078",
"html_url": "https://github.com/huggingface/transformers/pull/5078",
"diff_url": "https://github.com/huggingface/transformers/pull/5078.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5078.patch",
"merged_at": 1592631707000
} |
https://api.github.com/repos/huggingface/transformers/issues/5077 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5077/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5077/comments | https://api.github.com/repos/huggingface/transformers/issues/5077/events | https://github.com/huggingface/transformers/issues/5077 | 640,296,368 | MDU6SXNzdWU2NDAyOTYzNjg= | 5,077 | Several problems with named entities predicted with the ner pipeline | {
"login": "Nighthyst",
"id": 38282930,
"node_id": "MDQ6VXNlcjM4MjgyOTMw",
"avatar_url": "https://avatars.githubusercontent.com/u/38282930?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Nighthyst",
"html_url": "https://github.com/Nighthyst",
"followers_url": "https://api.github.com/users/Nighthyst/followers",
"following_url": "https://api.github.com/users/Nighthyst/following{/other_user}",
"gists_url": "https://api.github.com/users/Nighthyst/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Nighthyst/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Nighthyst/subscriptions",
"organizations_url": "https://api.github.com/users/Nighthyst/orgs",
"repos_url": "https://api.github.com/users/Nighthyst/repos",
"events_url": "https://api.github.com/users/Nighthyst/events{/privacy}",
"received_events_url": "https://api.github.com/users/Nighthyst/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"@Nighthyst thanks for summarizing these issues as I also ran into them.\r\nI was digging on this last weekend and I think maybe this could help:\r\n\r\nhttps://github.com/huggingface/tokenizers/pull/200\r\n\r\n\r\n>> Provide some more mappings on the Encoding in order to easily identify words after tokenization.\r\n\r\n>> It also exposes a method encode_tokenized on the BaseTokenizer to allow skipping the usual Normalizer and PreTokenizer.\r\nThis is especially useful for NER like datasets, where the pre-tokenization has already been done, and we want to attribute labels to pre-tokenized words.\r\n",
"Thanks for bringing this up. I can work on this on a separate PR after merging the PR that resolves the prior issue #4816.",
"some interesting finding:\r\n\r\nUsing a fast tokenizer solves the `[UNK]` issue. using one of your provided examples:\r\n\r\n```python\r\nmodel = AutoModelForTokenClassification.from_pretrained(\"dbmdz/bert-large-cased-finetuned-conll03-english\")\r\ntokenizer = AutoTokenizer.from_pretrained(\"bert-base-cased\", use_fast=True)\r\nnlp = TokenClassificationPipeline(model=model,\r\n tokenizer=tokenizer,\r\n grouped_entities=False)\r\n\r\nt=\"Product sales to the PSA Peugeot Citroën group totaled € 1 , 893 . 6 million in 2012 , down 8 . 1 % on a reported basis and 10 . 4 % on a like - for - like basis .\"\r\n\r\nnlp(t)\r\n```\r\n\r\n```\r\n[{'word': 'PS', 'score': 0.9961145520210266, 'entity': 'I-ORG', 'index': 5},\r\n {'word': '##A', 'score': 0.9905584454536438, 'entity': 'I-ORG', 'index': 6},\r\n {'word': 'P', 'score': 0.997616708278656, 'entity': 'I-ORG', 'index': 7},\r\n {'word': '##eu', 'score': 0.9741767644882202, 'entity': 'I-ORG', 'index': 8},\r\n {'word': '##ge', 'score': 0.9928027391433716, 'entity': 'I-ORG', 'index': 9},\r\n {'word': '##ot', 'score': 0.9900722503662109, 'entity': 'I-ORG', 'index': 10},\r\n {'word': 'C', 'score': 0.9574489593505859, 'entity': 'I-ORG', 'index': 11},\r\n {'word': '##it', 'score': 0.824583113193512, 'entity': 'I-ORG', 'index': 12},\r\n {'word': '##ro', 'score': 0.7597800493240356, 'entity': 'I-ORG', 'index': 13},\r\n {'word': '##A', 'score': 0.953075647354126, 'entity': 'I-ORG', 'index': 14},\r\n {'word': '«', 'score': 0.6135829091072083, 'entity': 'I-ORG', 'index': 15}]\r\n```",
"@Nighthyst @dav009 Can you guys check if the above issues still persist after the recent PR merged (#4987)?",
"Hello @enzoampil, \r\n\r\nI updated transformers with master, with the command: \r\n\r\n`pip install --upgrade git+https://github.com/huggingface/transformers.git`\r\n\r\nThen I tried your tests and mine:\r\n\r\n```Python\r\nfrom transformers import pipeline\r\nNER_MODEL = \"mrm8488/bert-spanish-cased-finetuned-ner\"\r\nnlp_ner = pipeline(\"ner\", model=NER_MODEL,\r\n grouped_entities=True,\r\n tokenizer=(NER_MODEL, {\"use_fast\": False}))\r\n\r\nt = \"\"\"Consuelo Araújo Noguera, ministra de cultura del presidente Andrés Pastrana (1998.2002) fue asesinada por las Farc luego de haber permanecido secuestrada por algunos meses.\"\"\"\r\nnlp_ner(t)\r\n```\r\n\r\nI have the expected output : \r\n\r\n```\r\n[{'entity_group': 'B-PER',\r\n 'score': 0.9710702555520194,\r\n 'word': 'Consuelo Araújo Noguera'},\r\n {'entity_group': 'B-PER',\r\n 'score': 0.9997273534536362,\r\n 'word': 'Andrés Pastrana'},\r\n {'entity_group': 'B-ORG', 'score': 0.8589079678058624, 'word': 'Farc'}]\r\n```\r\n\r\nAnd for your other test : \r\n\r\n```Python\r\nnlp = pipeline('ner', grouped_entities=False)\r\nnlp(\"Enzo works at the the UN\")\r\n```\r\n\r\nOutput : \r\n\r\n```\r\n[{'word': 'En', 'score': 0.9968166351318359, 'entity': 'I-PER', 'index': 1},\r\n {'word': '##zo', 'score': 0.9957635998725891, 'entity': 'I-PER', 'index': 2},\r\n {'word': 'UN', 'score': 0.9986497163772583, 'entity': 'I-ORG', 'index': 7}]\r\n```\r\n\r\nAnd,\r\n\r\n```Python\r\nnlp2 = pipeline('ner', grouped_entities=True)\r\nnlp2(\"Enzo works at the the UN\")\r\n```\r\n\r\nOutput : \r\n\r\n```\r\n{'entity_group': 'I-PER', 'score': 0.9962901175022125, 'word': 'Enzo'},\r\n {'entity_group': 'I-ORG', 'score': 0.9986497163772583, 'word': 'UN'}]\r\n```\r\n\r\nHowever with my test : \r\n\r\n```Python\r\nimport torch\r\nfrom transformers import AutoModelForTokenClassification, AutoTokenizer\r\nfrom transformers import TokenClassificationPipeline\r\n\r\nmodel = AutoModelForTokenClassification.from_pretrained(\"dbmdz/bert-large-cased-finetuned-conll03-english\")\r\ntokenizer = AutoTokenizer.from_pretrained(\"bert-base-cased\")\r\n\r\nnlp_not_grouped = TokenClassificationPipeline(\r\n model=model,\r\n tokenizer=tokenizer,\r\n grouped_entities=False\r\n)\r\n\r\nnlp_grouped = TokenClassificationPipeline(\r\n model=model,\r\n tokenizer=tokenizer,\r\n grouped_entities=True\r\n)\r\n\r\nseq1 = \"Hugging Face Inc. is a company based in New York City. Its headquarters are in DUMBO, therefore very\" \\\r\n \"close to the Manhattan Bridge.\"\r\n\r\nseq2 = \"In addition , the Blabla Group has completed the acquisition of ISO / TS16949 certification .\"\r\n\r\nseq3 = \"Product sales to the PSA Peugeot Citroën group totaled € 1 , 893 . 6 million in 2012 , down 8 . 1 % \"\\\r\n\"on a reported basis and 10 . 
4 % on a like - for - like basis .\"\r\n\r\nseq4 = \"To prepare as best as possible the decisions falling under its responsibilities , Faurecia ’ s Board of\"\\\r\n\" Directors has set up three committees : c Audit Committee ; c Strategy Committee ; c Appointments and Compensation\"\\\r\n\" Committee .\"\r\n\r\nsequences = [seq1, seq2, seq3, seq4]\r\n\r\nfor i, seq in enumerate(sequences):\r\n ngrouped, grouped = nlp_not_grouped(seq), nlp_grouped(seq)\r\n print(f\"===================== sentence n°{i+1}\")\r\n print(\"---Sentence---\")\r\n print(seq)\r\n print(\"---Not grouped entities---\")\r\n for ngent in ngrouped:\r\n print(ngent)\r\n print(\"---Grouped entities---\")\r\n for gent in grouped:\r\n print(gent)\r\n```\r\n\r\nI have this : \r\n\r\n```\r\n===================== sentence n°1\r\n---Sentence---\r\nHugging Face Inc. is a company based in New York City. Its headquarters are in DUMBO, therefore veryclose to the Manhattan Bridge.\r\n---Not grouped entities---\r\n{'word': 'Hu', 'score': 0.9995108246803284, 'entity': 'I-ORG', 'index': 1}\r\n{'word': '##gging', 'score': 0.989597499370575, 'entity': 'I-ORG', 'index': 2}\r\n{'word': 'Face', 'score': 0.9979704022407532, 'entity': 'I-ORG', 'index': 3}\r\n{'word': 'Inc', 'score': 0.9993758797645569, 'entity': 'I-ORG', 'index': 4}\r\n{'word': 'New', 'score': 0.9993405938148499, 'entity': 'I-LOC', 'index': 11}\r\n{'word': 'York', 'score': 0.9991927742958069, 'entity': 'I-LOC', 'index': 12}\r\n{'word': 'City', 'score': 0.9993411302566528, 'entity': 'I-LOC', 'index': 13}\r\n{'word': 'D', 'score': 0.986336350440979, 'entity': 'I-LOC', 'index': 19}\r\n{'word': '##UM', 'score': 0.9396238923072815, 'entity': 'I-LOC', 'index': 20}\r\n{'word': '##BO', 'score': 0.9121386408805847, 'entity': 'I-LOC', 'index': 21}\r\n{'word': 'Manhattan', 'score': 0.9839190244674683, 'entity': 'I-LOC', 'index': 29}\r\n{'word': 'Bridge', 'score': 0.9924242496490479, 'entity': 'I-LOC', 'index': 30}\r\n---Grouped entities---\r\n{'entity_group': 'I-ORG', 'score': 0.9966136515140533, 'word': 'Hugging Face Inc'}\r\n{'entity_group': 'I-LOC', 'score': 0.9992914994557699, 'word': 'New York City'}\r\n{'entity_group': 'I-LOC', 'score': 0.9460329612096151, 'word': 'DUMBO'}\r\n{'entity_group': 'I-LOC', 'score': 0.9881716370582581, 'word': 'Manhattan Bridge'}\r\n===================== sentence n°2\r\n---Sentence---\r\nIn addition , the Blabla Group has completed the acquisition of ISO / TS16949 certification .\r\n---Not grouped entities---\r\n{'word': 'B', 'score': 0.9997261762619019, 'entity': 'I-ORG', 'index': 5}\r\n{'word': '##la', 'score': 0.997683048248291, 'entity': 'I-ORG', 'index': 6}\r\n{'word': '##bla', 'score': 0.99888014793396, 'entity': 'I-ORG', 'index': 7}\r\n{'word': 'Group', 'score': 0.9992784261703491, 'entity': 'I-ORG', 'index': 8}\r\n{'word': 'ISO', 'score': 0.9711909890174866, 'entity': 'I-MISC', 'index': 14}\r\n{'word': 'T', 'score': 0.6591967344284058, 'entity': 'I-ORG', 'index': 16}\r\n{'word': '##S', 'score': 0.658642053604126, 'entity': 'I-MISC', 'index': 17}\r\n{'word': '##16', 'score': 0.5059574842453003, 'entity': 'I-MISC', 'index': 18}\r\n{'word': '##9', 'score': 0.5067382454872131, 'entity': 'I-MISC', 'index': 21}\r\n---Grouped entities---\r\n{'entity_group': 'I-ORG', 'score': 0.9988919496536255, 'word': 'Blabla Group'}\r\n{'entity_group': 'I-MISC', 'score': 0.9711909890174866, 'word': 'ISO'}\r\n{'entity_group': 'I-ORG', 'score': 0.6591967344284058, 'word': 'T'}\r\n{'entity_group': 'I-MISC', 'score': 0.5822997689247131, 'word': 
'##S16'}\r\n{'entity_group': 'I-MISC', 'score': 0.5067382454872131, 'word': '##9'}\r\n===================== sentence n°3\r\n---Sentence---\r\nProduct sales to the PSA Peugeot Citroën group totaled € 1 , 893 . 6 million in 2012 , down 8 . 1 % on a reported basis and 10 . 4 % on a like - for - like basis .\r\n---Not grouped entities---\r\n{'word': 'PS', 'score': 0.9970256686210632, 'entity': 'I-ORG', 'index': 5}\r\n{'word': '##A', 'score': 0.9927457571029663, 'entity': 'I-ORG', 'index': 6}\r\n{'word': 'P', 'score': 0.9980151653289795, 'entity': 'I-ORG', 'index': 7}\r\n{'word': '##eu', 'score': 0.9897757768630981, 'entity': 'I-ORG', 'index': 8}\r\n{'word': '##ge', 'score': 0.996147871017456, 'entity': 'I-ORG', 'index': 9}\r\n{'word': '##ot', 'score': 0.9928787350654602, 'entity': 'I-ORG', 'index': 10}\r\n{'word': '[UNK]', 'score': 0.5744695067405701, 'entity': 'I-ORG', 'index': 11}\r\n---Grouped entities---\r\n{'entity_group': 'I-ORG', 'score': 0.934436925819942, 'word': 'PSA Peugeot [UNK]'}\r\n===================== sentence n°4\r\n---Sentence---\r\nTo prepare as best as possible the decisions falling under its responsibilities , Faurecia ’ s Board of Directors has set up three committees : c Audit Committee ; c Strategy Committee ; c Appointments and Compensation Committee .\r\n---Not grouped entities---\r\n{'word': 'F', 'score': 0.9983997941017151, 'entity': 'I-ORG', 'index': 14}\r\n{'word': '##au', 'score': 0.9473735690116882, 'entity': 'I-ORG', 'index': 15}\r\n{'word': '##re', 'score': 0.9604568481445312, 'entity': 'I-ORG', 'index': 16}\r\n{'word': '##cia', 'score': 0.992807149887085, 'entity': 'I-ORG', 'index': 17}\r\n{'word': 'Board', 'score': 0.8452167510986328, 'entity': 'I-ORG', 'index': 20}\r\n{'word': 'of', 'score': 0.5921975374221802, 'entity': 'I-ORG', 'index': 21}\r\n{'word': 'Directors', 'score': 0.6778028607368469, 'entity': 'I-ORG', 'index': 22}\r\n{'word': 'Audi', 'score': 0.9764850735664368, 'entity': 'I-ORG', 'index': 30}\r\n{'word': '##t', 'score': 0.9692177772521973, 'entity': 'I-ORG', 'index': 31}\r\n{'word': 'Committee', 'score': 0.9959701299667358, 'entity': 'I-ORG', 'index': 32}\r\n{'word': 'Strategy', 'score': 0.9705951809883118, 'entity': 'I-ORG', 'index': 35}\r\n{'word': 'Committee', 'score': 0.994032621383667, 'entity': 'I-ORG', 'index': 36}\r\n{'word': 'A', 'score': 0.9764854907989502, 'entity': 'I-ORG', 'index': 39}\r\n{'word': '##oint', 'score': 0.7803319692611694, 'entity': 'I-ORG', 'index': 41}\r\n{'word': '##ments', 'score': 0.7828453779220581, 'entity': 'I-ORG', 'index': 42}\r\n{'word': 'and', 'score': 0.9625542163848877, 'entity': 'I-ORG', 'index': 43}\r\n{'word': 'Co', 'score': 0.9904180765151978, 'entity': 'I-ORG', 'index': 44}\r\n{'word': '##mp', 'score': 0.9140805602073669, 'entity': 'I-ORG', 'index': 45}\r\n{'word': '##ens', 'score': 0.8661588430404663, 'entity': 'I-ORG', 'index': 46}\r\n{'word': '##ation', 'score': 0.9150537252426147, 'entity': 'I-ORG', 'index': 47}\r\n{'word': 'Committee', 'score': 0.9888517260551453, 'entity': 'I-ORG', 'index': 48}\r\n---Grouped entities---\r\n{'entity_group': 'I-ORG', 'score': 0.9747593402862549, 'word': 'Faurecia'}\r\n{'entity_group': 'I-ORG', 'score': 0.7050723830858866, 'word': 'Board of Directors'}\r\n{'entity_group': 'I-ORG', 'score': 0.9805576602617899, 'word': 'Audit Committee'}\r\n{'entity_group': 'I-ORG', 'score': 0.9823139011859894, 'word': 'Strategy Committee'}\r\n{'entity_group': 'I-ORG', 'score': 0.9764854907989502, 'word': 'A'}\r\n{'entity_group': 'I-ORG', 'score': 0.9000368118286133, 'word': 
'##ointments and Compensation Committee'}\r\n```\r\n\r\nIt seems like the problem is still here for sentence n°4 : the last group should be \"Appointments and Compensation Committee\". For sentence n°2 it should be : \"TS16949\" as MISC or ORG at least it predicts the T in ORG and the other part in MISC. Even if both parts don't have the same entity tag, the ORG part should have been in one group \"S16949\" at least I think.\r\n\r\nAlso @dav009 \"trick\" to solve the [UNK] issue seems to not be working anymore : \r\n\r\n```Python\r\nmodel = AutoModelForTokenClassification.from_pretrained(\"dbmdz/bert-large-cased-finetuned-conll03-english\")\r\ntokenizer = AutoTokenizer.from_pretrained(\"bert-base-cased\", use_fast=True)\r\nnlp = TokenClassificationPipeline(model=model,\r\n tokenizer=tokenizer,\r\n grouped_entities=False)\r\n\r\nt=\"Product sales to the PSA Peugeot Citroën group totaled € 1 , 893 . 6 million in 2012 , down 8 . 1 % on a reported basis and 10 . 4 % on a like - for - like basis .\"\r\n\r\nnlp(t)\r\n```\r\n\r\nOutput : \r\n```\r\n\r\n[{'word': 'PS', 'score': 0.9970256686210632, 'entity': 'I-ORG', 'index': 5},\r\n {'word': '##A', 'score': 0.9927457571029663, 'entity': 'I-ORG', 'index': 6},\r\n {'word': 'P', 'score': 0.9980151653289795, 'entity': 'I-ORG', 'index': 7},\r\n {'word': '##eu', 'score': 0.9897757768630981, 'entity': 'I-ORG', 'index': 8},\r\n {'word': '##ge', 'score': 0.996147871017456, 'entity': 'I-ORG', 'index': 9},\r\n {'word': '##ot', 'score': 0.9928787350654602, 'entity': 'I-ORG', 'index': 10},\r\n {'word': '[UNK]',\r\n 'score': 0.5744695067405701,\r\n 'entity': 'I-ORG',\r\n 'index': 11}]\r\n```\r\n\r\nThe [UNK] token is back",
"For sentence 4, this is because the ##pp in “Appointments”, is not being tagged as an entity. This will require a separate PR that assumes that all the word pieces attached to a tagged entity token, should also be tagged with the same entity, whether or not it was tagged.",
"A similar situation is happening in sentence 2. The clue is in the value for “index”. You’ll notice that the tokens aren’t contiguous and so aren’t being grouped together. This implies that some middle word pieces aren’t being tagged as entities.",
"For the [UNK] issue, this “might” be because that word piece token was out of vocabulary and so gets converted to [UNK] at the decoding step.\n\nSince this happens before entity grouping, I think safe to say this is unrelated to entity grouping and is related to how the raw NER forward pass is handled.\n\nPerhaps we can separate this from the above issue? Both will require separate PR’s to address.",
"Actually you're right it seems that sentences n°2 and n°4 are showing a different issue : if the index is not contiguous (because a part is missing in the prediction : \"pp\" for n°4 and \"94\" for n°2) then the grouping fails. It's indeed a different issue.",
"> For sentence 4, this is because the ##pp in “Appointments”, is not being tagged as an entity. This will require a separate PR that assumes that all the word pieces attached to a tagged entity token, should also be tagged with the same entity, whether or not it was tagged.\r\n\r\nAlthough I agree that it could be solved in a next PR, shouldn't this more 'holistic' view be preferable (and be the default). If one token in a word is 'missed' but the other four (e.g. PER-PER-O-PER-PER) are an entity the whole word is an entity (and not two separate entities). We 'know' what the word-level comprehends the model doesn't",
"@HHoofs agree that this should be the default. If the \"word-level\" implementation is submitted as a PR, this should not be the default behaviour and should be explicitly set.",
"I agree with that, what I meant however was the following case: `Italy`\nLet's say that this consists of three subtokens: `_It`, `a`, `ly`\nIf the first and last tokens are assigned as Country en the middle as None, it would now result in a splitted output (if I understand correctly).\nI would suggest that the outputs of all three subtokens are averaged and than the highest output class is selected.",
"In pseudo-code, I would suggest the following (order):\r\n```\r\n...\r\n# first check if the user want to have grouped entities\r\nif self.grouped_entities:\r\n word_scores = []\r\n for token in tokens:\r\n # first input should always be a 'new word'\r\n if is_new_word(token):\r\n word_scores.append(score)\r\n score = np.zeros((0,?))\r\n score = np.sum(score, token['score'])\r\n # now you have a list of summed entity scores for each seperate word\r\n word_scores.argmax(axis=-1)\r\n ...\r\n \r\nelse:\r\n return ...\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,592 | 1,602 | 1,602 | NONE | null | # 🐛 Bug
## Information
Hello,
I am using the `bert-base-cased` tokenizer with the `dbmdz/bert-large-cased-finetuned-conll03-english` model to predict named entities for a bunch of sentences (around 29,900). I am facing 3 main issues:
1. Residual '##' markers in grouped entities' `word` field (so the entities are not properly grouped)
2. [UNK] (or [CLS]) tokens inside `word` fields
3. Missing syllables in `word` fields
Model I am using (Bert, XLNet ...): Bert (`dbmdz/bert-large-cased-finetuned-conll03-english`)
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: NER with my own unlabelled dataset
## To reproduce
I didn't find an official example for this, so I made my own script with `TokenClassificationPipeline`:
```Python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer
from transformers import TokenClassificationPipeline
model = AutoModelForTokenClassification.from_pretrained("dbmdz/bert-large-cased-finetuned-conll03-english")
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
nlp_not_grouped = TokenClassificationPipeline(
model=model,
tokenizer=tokenizer,
grouped_entities=False
)
nlp_grouped = TokenClassificationPipeline(
model=model,
tokenizer=tokenizer,
grouped_entities=True
)
seq1 = "Hugging Face Inc. is a company based in New York City. Its headquarters are in DUMBO, therefore very" \
"close to the Manhattan Bridge."
seq2 = "In addition , the Blabla Group has completed the acquisition of ISO / TS16949 certification ."
seq3 = "Product sales to the PSA Peugeot Citroën group totaled € 1 , 893 . 6 million in 2012 , down 8 . 1 % "\
"on a reported basis and 10 . 4 % on a like - for - like basis ."
seq4 = "To prepare as best as possible the decisions falling under its responsibilities , Faurecia ’ s Board of"\
" Directors has set up three committees : c Audit Committee ; c Strategy Committee ; c Appointments and Compensation"\
" Committee ."
sequences = [seq1, seq2, seq3, seq4]
for i, seq in enumerate(sequences):
ngrouped, grouped = nlp_not_grouped(seq), nlp_grouped(seq)
print(f"===================== sentence n°{i+1}")
print("---Sentence---")
print(seq)
print("---Not grouped entities---")
for ngent in ngrouped:
print(ngent)
print("---Grouped entities---")
for gent in grouped:
print(gent)
```
I have about 29,900 sentences. For each sentence, I want to predict all the named entities and then locate them in the sentence. Once I have an entity, I use a regex to find it in the original (pre-tokenization) sentence, like this:
```Python
start, stop = re.search(re.escape(ent['word']), sent).span()
```
where `ent['word']` is the text of an entity found in a sentence `sent`; for instance, it can be `"London"` for the sentence `"London is really a great city"`. I run this lookup on the grouped entities, but because of the errors listed above, many of them are discarded: `re.search()` finds no match, so the call to `.span()` raises an exception (which I catch).
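Concretely, the lookup loop is roughly the following (a simplified sketch; `nlp_grouped` and `sequences` are defined as in the script below):
```Python
import re

for sent in sequences:
    for ent in nlp_grouped(sent):
        try:
            start, stop = re.search(re.escape(ent['word']), sent).span()
        except AttributeError:
            # re.search() returned None: the predicted 'word' (e.g. one with a
            # residual '##', an [UNK], or missing syllables) does not occur in
            # the raw sentence, so the entity has to be discarded
            continue
        print(ent['entity_group'], sent[start:stop], (start, stop))
```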
Steps to reproduce the behavior:
You just have to run my script to predict the entities for the four sentences. Here is what I get:
```Python
===================== sentence n°1
---Sentence---
Hugging Face Inc. is a company based in New York City. Its headquarters are in DUMBO, therefore veryclose to the Manhattan Bridge.
---Not grouped entities---
{'word': 'Hu', 'score': 0.9995108246803284, 'entity': 'I-ORG', 'index': 1}
{'word': '##gging', 'score': 0.989597499370575, 'entity': 'I-ORG', 'index': 2}
{'word': 'Face', 'score': 0.9979704022407532, 'entity': 'I-ORG', 'index': 3}
{'word': 'Inc', 'score': 0.9993758797645569, 'entity': 'I-ORG', 'index': 4}
{'word': 'New', 'score': 0.9993405938148499, 'entity': 'I-LOC', 'index': 11}
{'word': 'York', 'score': 0.9991927742958069, 'entity': 'I-LOC', 'index': 12}
{'word': 'City', 'score': 0.9993411302566528, 'entity': 'I-LOC', 'index': 13}
{'word': 'D', 'score': 0.986336350440979, 'entity': 'I-LOC', 'index': 19}
{'word': '##UM', 'score': 0.9396238923072815, 'entity': 'I-LOC', 'index': 20}
{'word': '##BO', 'score': 0.9121386408805847, 'entity': 'I-LOC', 'index': 21}
{'word': 'Manhattan', 'score': 0.9839190244674683, 'entity': 'I-LOC', 'index': 29}
{'word': 'Bridge', 'score': 0.9924242496490479, 'entity': 'I-LOC', 'index': 30}
---Grouped entities---
{'entity_group': 'I-ORG', 'score': 0.9966136515140533, 'word': 'Hugging Face Inc'}
{'entity_group': 'I-LOC', 'score': 0.9992914994557699, 'word': 'New York City'}
{'entity_group': 'I-LOC', 'score': 0.9460329612096151, 'word': 'DUMBO'}
{'entity_group': 'I-LOC', 'score': 0.9881716370582581, 'word': 'Manhattan Bridge'}
===================== sentence n°2
---Sentence---
In addition , the Blabla Group has completed the acquisition of ISO / TS16949 certification .
---Not grouped entities---
{'word': 'B', 'score': 0.9997261762619019, 'entity': 'I-ORG', 'index': 5}
{'word': '##la', 'score': 0.997683048248291, 'entity': 'I-ORG', 'index': 6}
{'word': '##bla', 'score': 0.99888014793396, 'entity': 'I-ORG', 'index': 7}
{'word': 'Group', 'score': 0.9992784261703491, 'entity': 'I-ORG', 'index': 8}
{'word': 'ISO', 'score': 0.9711909890174866, 'entity': 'I-MISC', 'index': 14}
{'word': 'T', 'score': 0.6591967344284058, 'entity': 'I-ORG', 'index': 16}
{'word': '##S', 'score': 0.658642053604126, 'entity': 'I-MISC', 'index': 17}
{'word': '##16', 'score': 0.5059574842453003, 'entity': 'I-MISC', 'index': 18}
{'word': '##9', 'score': 0.5067382454872131, 'entity': 'I-MISC', 'index': 21}
---Grouped entities---
{'entity_group': 'I-ORG', 'score': 0.9988919496536255, 'word': 'Blabla Group'}
{'entity_group': 'I-MISC', 'score': 0.9711909890174866, 'word': 'ISO'}
{'entity_group': 'I-ORG', 'score': 0.6591967344284058, 'word': 'T'}
{'entity_group': 'I-MISC', 'score': 0.5822997689247131, 'word': '##S16'}
===================== sentence n°3
---Sentence---
Product sales to the PSA Peugeot Citroën group totaled € 1 , 893 . 6 million in 2012 , down 8 . 1 % on a reported basis and 10 . 4 % on a like - for - like basis .
---Not grouped entities---
{'word': 'PS', 'score': 0.9970256686210632, 'entity': 'I-ORG', 'index': 5}
{'word': '##A', 'score': 0.9927457571029663, 'entity': 'I-ORG', 'index': 6}
{'word': 'P', 'score': 0.9980151653289795, 'entity': 'I-ORG', 'index': 7}
{'word': '##eu', 'score': 0.9897757768630981, 'entity': 'I-ORG', 'index': 8}
{'word': '##ge', 'score': 0.996147871017456, 'entity': 'I-ORG', 'index': 9}
{'word': '##ot', 'score': 0.9928787350654602, 'entity': 'I-ORG', 'index': 10}
{'word': '[UNK]', 'score': 0.5744695067405701, 'entity': 'I-ORG', 'index': 11}
---Grouped entities---
{'entity_group': 'I-ORG', 'score': 0.934436925819942, 'word': 'PSA Peugeot [UNK]'}
===================== sentence n°4
---Sentence---
To prepare as best as possible the decisions falling under its responsibilities , Faurecia ’ s Board of Directors has set up three committees : c Audit Committee ; c Strategy Committee ; c Appointments and Compensation Committee .
---Not grouped entities---
{'word': 'F', 'score': 0.9983997941017151, 'entity': 'I-ORG', 'index': 14}
{'word': '##au', 'score': 0.9473735690116882, 'entity': 'I-ORG', 'index': 15}
{'word': '##re', 'score': 0.9604568481445312, 'entity': 'I-ORG', 'index': 16}
{'word': '##cia', 'score': 0.992807149887085, 'entity': 'I-ORG', 'index': 17}
{'word': 'Board', 'score': 0.8452167510986328, 'entity': 'I-ORG', 'index': 20}
{'word': 'of', 'score': 0.5921975374221802, 'entity': 'I-ORG', 'index': 21}
{'word': 'Directors', 'score': 0.6778028607368469, 'entity': 'I-ORG', 'index': 22}
{'word': 'Audi', 'score': 0.9764850735664368, 'entity': 'I-ORG', 'index': 30}
{'word': '##t', 'score': 0.9692177772521973, 'entity': 'I-ORG', 'index': 31}
{'word': 'Committee', 'score': 0.9959701299667358, 'entity': 'I-ORG', 'index': 32}
{'word': 'Strategy', 'score': 0.9705951809883118, 'entity': 'I-ORG', 'index': 35}
{'word': 'Committee', 'score': 0.994032621383667, 'entity': 'I-ORG', 'index': 36}
{'word': 'A', 'score': 0.9764854907989502, 'entity': 'I-ORG', 'index': 39}
{'word': '##oint', 'score': 0.7803319692611694, 'entity': 'I-ORG', 'index': 41}
{'word': '##ments', 'score': 0.7828453779220581, 'entity': 'I-ORG', 'index': 42}
{'word': 'and', 'score': 0.9625542163848877, 'entity': 'I-ORG', 'index': 43}
{'word': 'Co', 'score': 0.9904180765151978, 'entity': 'I-ORG', 'index': 44}
{'word': '##mp', 'score': 0.9140805602073669, 'entity': 'I-ORG', 'index': 45}
{'word': '##ens', 'score': 0.8661588430404663, 'entity': 'I-ORG', 'index': 46}
{'word': '##ation', 'score': 0.9150537252426147, 'entity': 'I-ORG', 'index': 47}
{'word': 'Committee', 'score': 0.9888517260551453, 'entity': 'I-ORG', 'index': 48}
---Grouped entities---
{'entity_group': 'I-ORG', 'score': 0.9747593402862549, 'word': 'Faurecia'}
{'entity_group': 'I-ORG', 'score': 0.7050723830858866, 'word': 'Board of Directors'}
{'entity_group': 'I-ORG', 'score': 0.9805576602617899, 'word': 'Audit Committee'}
{'entity_group': 'I-ORG', 'score': 0.9823139011859894, 'word': 'Strategy Committee'}
{'entity_group': 'I-ORG', 'score': 0.9764854907989502, 'word': 'A'}
{'entity_group': 'I-ORG', 'score': 0.9000368118286133, 'word': '##ointments and Compensation Committee'}
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
For the first sentence (seq1), everything is fine. It is the example from the NER section of the documentation's Usage page: https://huggingface.co/transformers/usage.html#named-entity-recognition
The other sentences each show one example of a problem:
### Residual '##' in word pieces
```Python
{'entity_group': 'I-MISC', 'score': 0.9711909890174866, 'word': 'ISO'}
{'entity_group': 'I-ORG', 'score': 0.6591967344284058, 'word': 'T'}
{'entity_group': 'I-MISC', 'score': 0.5822997689247131, 'word': '##S16'}
```
In seq2, `'##S16'` appears as a word. Obviously, it should have been grouped with the preceding entity to form `TS16`, or maybe even `'ISO / TS16949'`, like this:
```Python
{'entity_group': 'I-MISC', 'score': 0.9711909890174866, 'word': 'ISO / TS16949'}
```
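As a temporary workaround, any group whose `word` still starts with '##' can be folded back into the previous group. This is only a sketch over the `grouped_entities=True` output shown above; it merges across different `entity_group` tags (here `T` is I-ORG while `'##S16'` is I-MISC) and it obviously cannot recover syllables the pipeline has already dropped:
```Python
def merge_residual_wordpieces(groups):
    """Fold groups whose 'word' still starts with '##' into the previous group."""
    merged = []
    for group in groups:
        if merged and group['word'].startswith('##'):
            merged[-1]['word'] += group['word'][2:]  # strip the '##' marker
            # keep the lower score so the merged group stays conservative
            merged[-1]['score'] = min(merged[-1]['score'], group['score'])
        else:
            merged.append(dict(group))
    return merged
```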
### [UNK] tokens in the `word` field
```Python
{'entity_group': 'I-ORG', 'score': 0.934436925819942, 'word': 'PSA Peugeot [UNK]'}
```
This is probably caused by the unusual encoding of 'Citroën', which the tokenizer cannot handle. The entity found is `'PSA Peugeot [UNK]'`. In this case, it would be better to output just `'PSA Peugeot'` when the last token is identified as [UNK]:
```Python
{'entity_group': 'I-ORG', 'score': 0.934436925819942, 'word': 'PSA Peugeot'}
```
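A possible stopgap, again just a sketch over the grouped output, is to strip '[UNK]' pieces from the grouped `word` (a real fix would rather map tokens back to character offsets in the raw text):
```Python
def strip_unk(groups):
    """Remove '[UNK]' pieces from grouped words; drop groups that were only [UNK]."""
    cleaned = []
    for group in groups:
        word = ' '.join(piece for piece in group['word'].split() if piece != '[UNK]')
        if word:
            cleaned.append({**group, 'word': word})
    return cleaned
```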
### Syllables lost
For the last sentence, we can see that 'Appointments and Compensation Committee' has been split into:
```Python
{'entity_group': 'I-ORG', 'score': 0.9764854907989502, 'word': 'A'}
{'entity_group': 'I-ORG', 'score': 0.9000368118286133, 'word': '##ointments and Compensation Committee'}
```
instead of:
```Python
{'entity_group': 'I-ORG', 'score': 0.9000368118286133, 'word': 'Appointments and Compensation Committee'}
```
The entity is not properly grouped, but more importantly the 'pp' is missing, so even if we blended the two groups we would not recover the real entity. This problem was first raised in #4816. I actually encountered it while trying to fix the first issue: I noticed that some entities grouped like this are missing syllables. The pipeline with `grouped_entities=False` has already lost the 'pp':
```Python
{'word': 'A', 'score': 0.9764854907989502, 'entity': 'I-ORG', 'index': 39}
{'word': '##oint', 'score': 0.7803319692611694, 'entity': 'I-ORG', 'index': 41}
{'word': '##ments', 'score': 0.7828453779220581, 'entity': 'I-ORG', 'index': 42}
```
It seems the way the pipeline handles the tokens is not right, because when I predict the label for each token with the code example from the documentation, I get this:
`[('[CLS]', 'O'), ('To', 'O'), ('prepare', 'O'), ('as', 'O'), ('best', 'O'), ('as', 'I-ORG'), ('possible', 'I-ORG'), ('the', 'I-ORG'), ('decisions', 'I-ORG'), ('falling', 'I-ORG'), ('under', 'I-ORG'), ('its', 'I-ORG'), ('responsibilities', 'O'), (',', 'O'), ('F', 'O'), ('##au', 'O'), ('##re', 'O'), ('##cia', 'O'), ('[UNK]', 'O'), ('s', 'O'), ('Board', 'O'), ('of', 'O'), ('Directors', 'O'), ('has', 'O'), ('set', 'O'), ('up', 'O'), ('three', 'O'), ('committees', 'O'), (':', 'O'), ('c', 'O'), ('Audi', 'O'), ('##t', 'O'), ('Committee', 'O'), (';', 'O'), ('c', 'O'), ('Strategy', 'O'), ('Committee', 'O'), (';', 'O'), ('c', 'O'), ('A', 'O'), ('##pp', 'O'), ('##oint', 'O'), ('##ments', 'O'), ('and', 'O'), ('Co', 'O'), ('##mp', 'O'), ('##ens', 'O'), ('##ation', 'O'), ('Committee', 'O'), ('.', 'O'), ('[SEP]', 'O')]`
Those tokens are all there:
`('A', 'O'), ('##pp', 'O'), ('##oint', 'O'), ('##ments', 'O')` for 'Appointments', so the `##pp` piece does exist in the tokenization even though the pipeline output above skips its index (40).
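(For reference, here is roughly the snippet I used for these per-token predictions, adapted from the documentation example; `model` and `tokenizer` are the objects defined in the script above:)
```Python
import torch

sequence = seq4  # the sentence containing 'Appointments and Compensation Committee'
tokens = tokenizer.tokenize(tokenizer.decode(tokenizer.encode(sequence)))
inputs = tokenizer.encode(sequence, return_tensors="pt")
outputs = model(inputs)[0]
predictions = torch.argmax(outputs, dim=2)
print([(token, model.config.id2label[p]) for token, p in zip(tokens, predictions[0].tolist())])
```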
## Environment info
- `transformers` version: 2.11.0
- Platform: Windows-10-10.0.18362-SP0
- Python version: 3.7.6
- PyTorch version (GPU?): 1.5.0+cpu (False)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?: False
- Using distributed or parallel set-up in script?: False
EDIT: Typos | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5077/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5077/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5076 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5076/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5076/comments | https://api.github.com/repos/huggingface/transformers/issues/5076/events | https://github.com/huggingface/transformers/issues/5076 | 640,239,297 | MDU6SXNzdWU2NDAyMzkyOTc= | 5,076 | Colab session crashes on transformers | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"The same thing for me. Colab (using TPU) crashes when importing transformers package.\r\n\r\n\r\n\r\n",
"Same issue.",
"I'm trying to reproduce, but can't manage to make colab crash. @khalilRhouma, @amitness, did you have similar code to @antoniomastro1996? Would it be possible for you to show me the code you used?",
"@LysandreJik yes, of course you can follow this:\r\nhttps://colab.research.google.com/drive/1jwXgtOXE8v8_qkiOCbjFQRFC5semK8T7?usp=sharing",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,592 | 1,598 | 1,598 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
Hi everybody,
Something really strange happens in Colab after the recent updates of the `DataCollator` class. I don't know if the two things are correlated; however, after I install the following packages

```
!git clone https://github.com/huggingface/transformers.git
!pip install ./transformers
!pip install -U nlp
```

and then try to load them

```python
import nlp
from transformers import T5Tokenizer
```

the Colab instance crashes. Please find the log below:

```
WARNING:root:kernel 485c962d-efa0-4103-9c68-ed22abd8839f restarted
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5076/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5076/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5075 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5075/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5075/comments | https://api.github.com/repos/huggingface/transformers/issues/5075/events | https://github.com/huggingface/transformers/issues/5075 | 640,229,333 | MDU6SXNzdWU2NDAyMjkzMzM= | 5,075 | Converting to ONNX doesn't apply to all models | {
"login": "alvations",
"id": 1050316,
"node_id": "MDQ6VXNlcjEwNTAzMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1050316?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alvations",
"html_url": "https://github.com/alvations",
"followers_url": "https://api.github.com/users/alvations/followers",
"following_url": "https://api.github.com/users/alvations/following{/other_user}",
"gists_url": "https://api.github.com/users/alvations/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alvations/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvations/subscriptions",
"organizations_url": "https://api.github.com/users/alvations/orgs",
"repos_url": "https://api.github.com/users/alvations/repos",
"events_url": "https://api.github.com/users/alvations/events{/privacy}",
"received_events_url": "https://api.github.com/users/alvations/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Here is a related PyTorch issue: https://github.com/pytorch/pytorch/issues/32968.\r\n\r\nHere is a list of pretraineds model that can be exported to ONNX:\r\nbert-base-cased, distilbert-base-uncased, roberta-base, gpt2, distilgpt2, openai-gpt, albert-base-v2, xlnet-base-cased\r\n\r\nSo far, the following models have problem in exporting to ONNX:\r\nbart-large, transfo-xl-wt103, t5-base, xlm-mlm-en-2048\r\n\r\n\r\n\r\n",
"Anyone had any luck resolving this issue? The Pytorch issue linked above has a couple of potential workarounds. I notice it affects Pegasus as well",
"This pull request was closed by its author but doesn't raise the issue when I run convert on a Bart model. https://github.com/huggingface/transformers/pull/6334\r\n",
"Can a custom distilbert model which is saved in local disk be converted to onnx?",
"Can we convert Pix2Struct model into onnx i tried to do so manually it always show some error"
] | 1,592 | 1,691 | 1,599 | NONE | null | Is it possible to list or highlight which models are ONNX convertible?
On a fresh Python environment:
```
python -m pip install -U transformers
python -m pip install mosestokenizer
```
The `transformers` version on the machine is `2.11.0`.
When trying to convert the MarianMT model (https://huggingface.co/transformers/model_doc/marian.html) to ONNX, it throws the following error:
```
$ python convert_graph_to_onnx.py --framework pt --model Helsinki-NLP/opus-mt-en-ROMANCE opus-mt-en-romance.onnx
Neither PyTorch nor TensorFlow >= 2.0 have been found. Models won't be available and only tokenizers, configurationand file/data utilities can be used.
ONNX opset version set to: 11
Loading pipeline (model: Helsinki-NLP/opus-mt-en-ROMANCE, tokenizer: Helsinki-NLP/opus-mt-en-ROMANCE)
/Users/username/.pyenv/versions/3.8.0/lib/python3.8/site-packages/transformers/tokenization_utils.py:828: FutureWarning: Parameter max_len is deprecated and will be removed in a future release. Use model_max_length instead.
warnings.warn(
stdbuf was not found; communication with perl may hang due to stdio buffering.
Error while converting the model: 'NoneType' object has no attribute 'from_pretrained'
```
Then, after installing PyTorch:
```
$ python -m pip install -U pytorch
$ python convert_graph_to_onnx.py --framework pt --model Helsinki-NLP/opus-mt-en-ROMANCE opus-mt-en-romance.onnx
ONNX opset version set to: 11
Loading pipeline (model: Helsinki-NLP/opus-mt-en-ROMANCE, tokenizer: Helsinki-NLP/opus-mt-en-ROMANCE)
/Users/username/.pyenv/versions/3.8.0/lib/python3.8/site-packages/transformers/tokenization_utils.py:828: FutureWarning: Parameter max_len is deprecated and will be removed in a future release. Use model_max_length instead.
warnings.warn(
stdbuf was not found; communication with perl may hang due to stdio buffering.
Downloading: 100%|██████████████████████████████| 312M/312M [01:59<00:00, 2.61MB/s]
Error while converting the model: Folder /Users/username/git-stuff/transformers/src/transformers is not empty, aborting conversion
```
Then, after creating a new directory and running from there:
```
$ python ../convert_graph_to_onnx.py --framework pt --model Helsinki-NLP/opus-mt-en-ROMANCE opus-mt-en-romance.onnx
ONNX opset version set to: 11
Loading pipeline (model: Helsinki-NLP/opus-mt-en-ROMANCE, tokenizer: Helsinki-NLP/opus-mt-en-ROMANCE)
/Users/username/.pyenv/versions/3.8.0/lib/python3.8/site-packages/transformers/tokenization_utils.py:828: FutureWarning: Parameter max_len is deprecated and will be removed in a future release. Use model_max_length instead.
warnings.warn(
stdbuf was not found; communication with perl may hang due to stdio buffering.
Using framework PyTorch: 1.5.0
Found input input_ids with shape: {0: 'batch', 1: 'sequence'}
Found input attention_mask with shape: {0: 'batch', 1: 'sequence'}
Found output output_0 with shape: {0: 'batch', 1: 'sequence'}
Found output output_1 with shape: {0: 'batch', 1: 'sequence'}
Ensuring inputs are in correct order
decoder_input_ids is not present in the generated input list.
Generated inputs order: ['input_ids', 'attention_mask']
/Users/username/.pyenv/versions/3.8.0/lib/python3.8/site-packages/transformers/modeling_bart.py:173: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if not padding_mask.any():
/Users/username/.pyenv/versions/3.8.0/lib/python3.8/site-packages/transformers/modeling_bart.py:590: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert embed_dim == self.embed_dim
/Users/username/.pyenv/versions/3.8.0/lib/python3.8/site-packages/transformers/modeling_bart.py:591: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert list(query.size()) == [tgt_len, bsz, embed_dim]
/Users/username/.pyenv/versions/3.8.0/lib/python3.8/site-packages/transformers/modeling_bart.py:633: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert attn_weights.size() == (bsz * self.num_heads, tgt_len, src_len)
/Users/username/.pyenv/versions/3.8.0/lib/python3.8/site-packages/transformers/modeling_bart.py:642: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert key_padding_mask is None or key_padding_mask.size()[:2] == (bsz, src_len,)
/Users/username/.pyenv/versions/3.8.0/lib/python3.8/site-packages/transformers/modeling_bart.py:654: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert attn_output.size() == (bsz * self.num_heads, tgt_len, self.head_dim)
/Users/username/.pyenv/versions/3.8.0/lib/python3.8/site-packages/torch/onnx/utils.py:736: UserWarning: ONNX export failed on ATen operator triu because torch.onnx.symbolic_opset11.triu does not exist
warnings.warn("ONNX export failed on ATen operator {} because "
Error while converting the model: Exporting the operator triu to ONNX opset version 11 is not supported. Please open a bug to request ONNX export support for the missing operator.
```
It is understandable that ONNX might not support every model available through Hugging Face's transformers; in the meantime, a brute-force probe over the checkpoints (sketched below) seems to be the only option.
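A rough sketch of such a probe (the model list is only an example; since the script prints `Error while converting the model: ...` rather than exiting non-zero, the output has to be inspected, and each model needs its own empty output folder):
```python
import subprocess

candidates = ["bert-base-cased", "gpt2", "t5-base", "Helsinki-NLP/opus-mt-en-ROMANCE"]
for name in candidates:
    result = subprocess.run(
        ["python", "convert_graph_to_onnx.py", "--framework", "pt",
         "--model", name, f"onnx/{name.replace('/', '_')}/model.onnx"],
        capture_output=True, text=True,
    )
    # the script catches conversion failures and prints them instead of failing
    ok = "Error while converting" not in (result.stdout + result.stderr)
    print(f"{name}: {'convertible' if ok else 'NOT convertible'}")
```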
**But is there a way to know which model(s) are ONNX-convertible and which aren't?** | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5075/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5075/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5074 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5074/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5074/comments | https://api.github.com/repos/huggingface/transformers/issues/5074/events | https://github.com/huggingface/transformers/issues/5074 | 640,220,185 | MDU6SXNzdWU2NDAyMjAxODU= | 5,074 | how to get complete URLs to weights in 2.11.0 | {
"login": "allanj",
"id": 3351187,
"node_id": "MDQ6VXNlcjMzNTExODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3351187?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/allanj",
"html_url": "https://github.com/allanj",
"followers_url": "https://api.github.com/users/allanj/followers",
"following_url": "https://api.github.com/users/allanj/following{/other_user}",
"gists_url": "https://api.github.com/users/allanj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/allanj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/allanj/subscriptions",
"organizations_url": "https://api.github.com/users/allanj/orgs",
"repos_url": "https://api.github.com/users/allanj/repos",
"events_url": "https://api.github.com/users/allanj/events{/privacy}",
"received_events_url": "https://api.github.com/users/allanj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"They are the same as Roberta's \r\n\r\n```python\r\nvocab_url = \"https://s3.amazonaws.com/models.huggingface.co/bert/roberta-large-vocab.json\"\r\nmerges_url = \"https://s3.amazonaws.com/models.huggingface.co/bert/roberta-large-merges.txt\"\r\n```\r\n\r\nIs your question more general than just `bart-large`? (Feel free to close if not :) )",
"Thanks and yes. I'm interested in a more general case to retrieve the URLs to weights for different models.\r\nCurrently, what I did actually is switching back to 2.10.0. And go to corresponding `modeling_xxx.py` and find the download link.",
"You can find URLs to specific files on each model page (click on \"List all files\"): https://huggingface.co/distilbert-base-cased#list-files",
"Hi @julien-c , as mentioned, I actually know this way, but \"list all files\" seems not giving me the `vocab.json`, `merge.txt`, OR are they not required? ",
"Not all tokenizer types use those files. For instance Wordpiece (e.g. bert) is just one vocab.txt file.",
"Yup. I understand that, just named an example. But just want to make sure that files in https://huggingface.co/distilbert-base-cased#list-files are sufficient for me to use. \r\n"
] | 1,592 | 1,592 | 1,592 | CONTRIBUTOR | null | # ❓ Questions & Help
Since it is related to the newest release, I would like to raise the question here. As our company servers are not able to access the HF URLs, we have to download the models locally and upload them to the servers. It now seems I cannot find the links to download `pytorch_model.bin`, `config.json`, `vocab.json`, and `merges.txt`.
The only one I can find is https://huggingface.co/facebook/bart-large
But it only shows:
| File name | Last modified | File size|
|-- | -- | --|
|config.json | Fri, 24 Apr 2020 15:58:48 GMT | 1.2KB|
|pytorch_model.bin | Wed, 12 Feb 2020 19:53:45 GMT | 1.5GB|
|rust_model.ot | Sat, 25 Apr 2020 15:33:01 GMT | 1.9GB|
There is no `vocab.json` or `merges.txt`. I want to find the complete URLs to these files.
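In the meantime, one workaround that should work is to materialize all the files once on a machine with internet access via `save_pretrained`, then copy the directory to the servers; a sketch, assuming the standard Bart classes:
```python
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

# writes vocab.json, merges.txt, config.json, pytorch_model.bin, ... locally
tokenizer.save_pretrained("./bart-large-local")
model.save_pretrained("./bart-large-local")

# on the offline server:
# model = BartForConditionalGeneration.from_pretrained("./bart-large-local")
```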
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5074/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5074/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5073 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5073/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5073/comments | https://api.github.com/repos/huggingface/transformers/issues/5073/events | https://github.com/huggingface/transformers/pull/5073 | 640,181,307 | MDExOlB1bGxSZXF1ZXN0NDM1NjM1MDI4 | 5,073 | fix typo | {
"login": "timsuchanek",
"id": 1094804,
"node_id": "MDQ6VXNlcjEwOTQ4MDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1094804?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/timsuchanek",
"html_url": "https://github.com/timsuchanek",
"followers_url": "https://api.github.com/users/timsuchanek/followers",
"following_url": "https://api.github.com/users/timsuchanek/following{/other_user}",
"gists_url": "https://api.github.com/users/timsuchanek/gists{/gist_id}",
"starred_url": "https://api.github.com/users/timsuchanek/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/timsuchanek/subscriptions",
"organizations_url": "https://api.github.com/users/timsuchanek/orgs",
"repos_url": "https://api.github.com/users/timsuchanek/repos",
"events_url": "https://api.github.com/users/timsuchanek/events{/privacy}",
"received_events_url": "https://api.github.com/users/timsuchanek/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5073?src=pr&el=h1) Report\n> Merging [#5073](https://codecov.io/gh/huggingface/transformers/pull/5073?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e4aaa4580515446cd5a2972ab42fec0b95819c84&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5073?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5073 +/- ##\n=======================================\n Coverage 77.26% 77.26% \n=======================================\n Files 133 133 \n Lines 22146 22146 \n=======================================\n+ Hits 17110 17111 +1 \n+ Misses 5036 5035 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5073?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5073/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.42% <0.00%> (+0.15%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5073?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5073?src=pr&el=footer). Last update [e4aaa45...c2c5e07](https://codecov.io/gh/huggingface/transformers/pull/5073?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks for catching it."
] | 1,592 | 1,592 | 1,592 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5073/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5073/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5073",
"html_url": "https://github.com/huggingface/transformers/pull/5073",
"diff_url": "https://github.com/huggingface/transformers/pull/5073.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5073.patch",
"merged_at": 1592665204000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5072 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5072/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5072/comments | https://api.github.com/repos/huggingface/transformers/issues/5072/events | https://github.com/huggingface/transformers/issues/5072 | 640,071,313 | MDU6SXNzdWU2NDAwNzEzMTM= | 5,072 | 🐛 [TFTrainer] Wrong number of optimization steps | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"You are right, the number of optimization steps is already fixed in an another PR (#5051). And indeed the batch size for TPUs has to take into account the gradient accumulation, but I put this for later because we don't have yet a proper way to differenciate a run on TPU/CPU/GPU. Also the `unbatch` then `batch` is not needed. Same thing apply later when computing the loss in the training step.",
"Thanks for your answer @jplu !\r\n\r\nCan I ask you clarification about this :\r\n\r\n>Also the unbatch then batch is not needed. Same thing apply later when computing the loss in the training step.\r\n\r\nWe still need the data to be batched based on the batch_size_per_device, not the total batch size, right ?\r\nIs the `floor` / `ceil` needed ?\r\n\r\nIs there any other changes I should apply locally to make it work ?",
"I'm pretty sure that the TPUs as to be set for the full size of batches (including those with the accumulation).\r\n\r\n| Is the floor / ceil needed ?\r\n`floor` and `ceil` are also needed in order to be sure you get the proper approximation.\r\n\r\n| Is there any other changes I should apply locally to make it work ?\r\n\r\nThere are certainly other changes to make, but I still have to figure out which one :smile: Don't hesitate to participate on the PR I have opened ^^",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,592 | 1,598 | 1,598 | CONTRIBUTOR | null | # 🐛 Bug
It seems `TFTrainer` computes the wrong number of optimization steps:
https://github.com/huggingface/transformers/blob/e4aaa4580515446cd5a2972ab42fec0b95819c84/src/transformers/trainer_tf.py#L79
**It does not take into account the gradient accumulation**.
---
I believe this line should be changed to:
```python
if self.args.dataloader_drop_last:
approx = math.floor
else:
approx = math.ceil
self.train_steps: int = approx(self.num_train_examples / (self.args.train_batch_size * self.args.gradient_accumulation_steps))
```
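A quick sanity check with hypothetical numbers: 1,000 examples, a per-step batch size of 8 and 4 accumulation steps mean only 32 optimizer updates per epoch, while the current code reports 125 steps.
```python
import math

# hypothetical values, only to illustrate the missing gradient-accumulation factor
num_train_examples = 1000
train_batch_size = 8
gradient_accumulation_steps = 4

current = math.ceil(num_train_examples / train_batch_size)  # 125, ignores accumulation
fixed = math.ceil(num_train_examples / (train_batch_size * gradient_accumulation_steps))  # 32
print(current, fixed)
```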
---
Also, on TPU, `drop_remainder` should take the gradient accumulation steps into account as well.
https://github.com/huggingface/transformers/blob/e4aaa4580515446cd5a2972ab42fec0b95819c84/src/transformers/trainer_tf.py#L81-L86
Should be changed to:
```python
ds = (
self.train_dataset.cache()
.shuffle(self.num_train_examples)
.batch(self.args.train_batch_size * self.args.gradient_accumulation_steps, drop_remainder=self.args.dataloader_drop_last)
.unbatch()
.batch(self.args.train_batch_size)
.prefetch(tf.data.experimental.AUTOTUNE)
)
```
---
@jplu Maybe something to add in #5065?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5072/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5072/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5071 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5071/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5071/comments | https://api.github.com/repos/huggingface/transformers/issues/5071/events | https://github.com/huggingface/transformers/issues/5071 | 640,056,524 | MDU6SXNzdWU2NDAwNTY1MjQ= | 5,071 | glue.py Data Processor Index Error for Large Data | {
"login": "dr-aheydari",
"id": 36649087,
"node_id": "MDQ6VXNlcjM2NjQ5MDg3",
"avatar_url": "https://avatars.githubusercontent.com/u/36649087?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dr-aheydari",
"html_url": "https://github.com/dr-aheydari",
"followers_url": "https://api.github.com/users/dr-aheydari/followers",
"following_url": "https://api.github.com/users/dr-aheydari/following{/other_user}",
"gists_url": "https://api.github.com/users/dr-aheydari/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dr-aheydari/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dr-aheydari/subscriptions",
"organizations_url": "https://api.github.com/users/dr-aheydari/orgs",
"repos_url": "https://api.github.com/users/dr-aheydari/repos",
"events_url": "https://api.github.com/users/dr-aheydari/events{/privacy}",
"received_events_url": "https://api.github.com/users/dr-aheydari/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi Ali!\r\nBased on your error info, it seems that a line in your `train.tsv` does not follow the MRPC format. Or more specifically, that line may contain an empty field. Could you add a try-except to catch and print that line? It'll look like:\r\n\r\n```python\r\ntry:\r\n text_a = line[3]\r\nexcept:\r\n print(line)\r\n```",
"@JetRunner \r\nThank you very much for your suggestion. I found that there was a weird entry (that was not NaN nor empty) in some column of my data, and removing it with some modifications fixed the issue. \r\n\r\nThank you again for your clear explanation and suggestion. "
] | 1,592 | 1,592 | 1,592 | NONE | null | # 🐛 Bug
## Information
I am using `bert-base-uncased` and `roberta-base` for sentence-pair classification, following the exact same example setup and data processing as MRPC. I have a large collection of private data that is in the _exact same format as MRPC_, and I am using `run_glue.py` as the base code for running on my data.
Everything works great with smaller data files (between 10K and 200K samples), but the issue appears when I increase the data size (300K and above); I am not sure whether it is a bug or a memory issue. Here is what happens:
```
File "/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.6/site-packages/transformers/data/processors/glue.py", line 200, in _create_examples
text_a = line[3]
IndexError: list index out of range
```
The data, the format, and the values are consistent (I am not feeding in an empty file). The same data works just fine when split into smaller files. Is this an issue with the data processor or some sort of memory issue? It is perhaps important to note that I have not had any issues using this large dataset to train different models.
## To reproduce
Steps to reproduce the behavior:
1. Have a _title-pair dataset_ in the exact format of MRPC that is large, say more than 300K samples
2. Run `run_glue.py` with MRPC as the task
You should get the following error:
## The Error
```
Traceback (most recent call last):
File "run_glue.py", line 262, in <module>
main()
File "run_glue.py", line 140, in main
train_dataset = GlueDataset(data_args, tokenizer=tokenizer) if training_args.do_train else None
File "/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.6/site-packages/transformers/data/datasets/glue.py", line 118, in __init__
examples = self.processor.get_train_examples(args.data_dir)
File "/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.6/site-packages/transformers/data/processors/glue.py", line 179, in get_train_examples
return self._create_examples(self._read_tsv(os.path.join(data_dir, "train.tsv")), "train")
File "/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.6/site-packages/transformers/data/processors/glue.py", line 200, in _create_examples
text_a = line[3]
IndexError: list index out of range
```
## Environment info
- `transformers` version: 2.11.0
- Platform: Linux-4.14.171-105.231.amzn1.x86_64-x86_64-with-glibc2.9
- Python version: 3.6.10
- PyTorch version (GPU?): 1.5.0 (True)
Thank you very much in advance for your time and help.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5071/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5071/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5070 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5070/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5070/comments | https://api.github.com/repos/huggingface/transformers/issues/5070/events | https://github.com/huggingface/transformers/issues/5070 | 639,973,345 | MDU6SXNzdWU2Mzk5NzMzNDU= | 5,070 | Errors while running pytest | {
"login": "archanray",
"id": 22999839,
"node_id": "MDQ6VXNlcjIyOTk5ODM5",
"avatar_url": "https://avatars.githubusercontent.com/u/22999839?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/archanray",
"html_url": "https://github.com/archanray",
"followers_url": "https://api.github.com/users/archanray/followers",
"following_url": "https://api.github.com/users/archanray/following{/other_user}",
"gists_url": "https://api.github.com/users/archanray/gists{/gist_id}",
"starred_url": "https://api.github.com/users/archanray/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/archanray/subscriptions",
"organizations_url": "https://api.github.com/users/archanray/orgs",
"repos_url": "https://api.github.com/users/archanray/repos",
"events_url": "https://api.github.com/users/archanray/events{/privacy}",
"received_events_url": "https://api.github.com/users/archanray/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi, could you provide the versions of your software, like asked in the issue template?\r\n\r\nNamely tansformer version, python version, pytorch version, tensorflow version ...",
"updated",
"I still don't know what's your transformer version, which is arguably the most important version of the list :sweat_smile: \r\n\r\nDo you mind running `transformers-cli env` in your environment? It should output something along the lines of:\r\n\r\n```\r\n- `transformers` version: 2.11.0\r\n- Platform: Linux-5.6.15-1-MANJARO-x86_64-with-arch-Manjaro-Linux\r\n- Python version: 3.6.10\r\n- PyTorch version (GPU?): 1.5.0 (False)\r\n- Tensorflow version (GPU?): 2.2.0 (False)\r\n- Using GPU in script?: <fill in>\r\n- Using distributed or parallel set-up in script?: <fill in>\r\n```",
"apologies :( , updated again.",
"I met the same error with the same environment.\r\n",
"@LousyLory Hi, have you solved this problem?\r\nI have try python3.6/3.7 + torch 1.5/1.4 in venv/conda.\r\nAnd I have checked my environment using `transformers-cli env`, which output same with @LousyLory .\r\nAll of them fail the test.\r\nFor conda + python3.6 + torch 1.4:\r\nthe failure case is: (The other cases are just like this)\r\n```\r\n======================================================================================================================== short test summary info =========================================================================================================================\r\nFAILED tests/test_modeling_electra.py::ElectraModelTest::test_multigpu_data_parallel_forward - RuntimeError: tensor.ndimension() == static_cast<int64_t>(expected_size.size()) INTERNAL ASSERT FAILED at /opt/conda/conda-bld/pytorch_1579022034529/work/torch/csrc/cud...\r\nFAILED tests/test_modeling_gpt2.py::GPT2ModelTest::test_multigpu_data_parallel_forward - RuntimeError: Gather got an input of invalid size: got [2, 2, 4, 7, 8], but expected [2, 4, 4, 7, 8] (gather at /opt/conda/conda-bld/pytorch_1579022034529/work/torch/csrc/cud...\r\nFAILED tests/test_modeling_xlnet.py::XLNetModelTest::test_multigpu_data_parallel_forward - RuntimeError: Gather got an input of invalid size: got [7, 2, 32], but expected [7, 4, 32] (gather at /opt/conda/conda-bld/pytorch_1579022034529/work/torch/csrc/cuda/comm.c...\r\nFAILED tests/test_modeling_ctrl.py::CTRLModelTest::test_multigpu_data_parallel_forward - RuntimeError: Gather got an input of invalid size: got [2, 2, 4, 7, 8], but expected [2, 4, 4, 7, 8] (gather at /opt/conda/conda-bld/pytorch_1579022034529/work/torch/csrc/cud...\r\nFAILED tests/test_modeling_mobilebert.py::MobileBertModelTest::test_multigpu_data_parallel_forward - RuntimeError: Caught RuntimeError in replica 0 on device 0.\r\n================================================================================================= 5 failed, 1054 passed, 560 skipped, 302 warnings in 253.59s (0:04:13) ==================================================================================================\r\n```\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,592 | 1,599 | 1,599 | NONE | null | # ❓ Questions & Help
Hi!
I followed the from-source installation procedure in a conda environment.
`make test-examples` returns with the following error:
```
FAILED examples/token-classification/test_ner_examples.py::ExamplesTests::test_run_ner - AssertionError: 2.1329751014709473 not less than 1.5
FAILED examples/test_examples.py::ExamplesTests::test_run_glue - AssertionError: 0.5 not greater than or equal to 0.75
```
I also tried `make test` but get the following errors:
```
FAILED tests/test_modeling_distilbert.py::DistilBertModelTest::test_multigpu_data_parallel_forward - RuntimeError: Caught RuntimeError in replica 0 on device 0.
FAILED tests/test_modeling_electra.py::ElectraModelTest::test_multigpu_data_parallel_forward - RuntimeError: Gather input tensors must have the same number of dimensions: got 1, ...
FAILED tests/test_modeling_roberta.py::RobertaModelTest::test_multigpu_data_parallel_forward - RuntimeError: Caught RuntimeError in replica 0 on device 0.
FAILED tests/test_modeling_bert.py::BertModelTest::test_multigpu_data_parallel_forward - RuntimeError: Caught RuntimeError in replica 0 on device 0.
FAILED tests/test_modeling_albert.py::AlbertModelTest::test_multigpu_data_parallel_forward - RuntimeError: Caught RuntimeError in replica 0 on device 0.
FAILED tests/test_modeling_xlnet.py::XLNetModelTest::test_multigpu_data_parallel_forward - RuntimeError: Caught RuntimeError in replica 0 on device 0.
```
I followed the setup steps mentioned in the original readme. I am running the tests on an 8-GPU machine.
Please let me know how to fix this.
output of `transformers-cli env`
```
- `transformers` version: 2.11.0
- Platform: Linux-5.3.0-1023-aws-x86_64-with-debian-buster-sid
- Python version: 3.6.10
- PyTorch version (GPU?): 1.5.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
```
Thanks!! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5070/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5070/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5069 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5069/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5069/comments | https://api.github.com/repos/huggingface/transformers/issues/5069/events | https://github.com/huggingface/transformers/pull/5069 | 639,960,176 | MDExOlB1bGxSZXF1ZXN0NDM1NDU2ODUw | 5,069 | Typo | {
"login": "flozi00",
"id": 47894090,
"node_id": "MDQ6VXNlcjQ3ODk0MDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/flozi00",
"html_url": "https://github.com/flozi00",
"followers_url": "https://api.github.com/users/flozi00/followers",
"following_url": "https://api.github.com/users/flozi00/following{/other_user}",
"gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flozi00/subscriptions",
"organizations_url": "https://api.github.com/users/flozi00/orgs",
"repos_url": "https://api.github.com/users/flozi00/repos",
"events_url": "https://api.github.com/users/flozi00/events{/privacy}",
"received_events_url": "https://api.github.com/users/flozi00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,592 | 1,592 | 1,592 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5069/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5069/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5069",
"html_url": "https://github.com/huggingface/transformers/pull/5069",
"diff_url": "https://github.com/huggingface/transformers/pull/5069.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5069.patch",
"merged_at": 1592340381000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5068 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5068/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5068/comments | https://api.github.com/repos/huggingface/transformers/issues/5068/events | https://github.com/huggingface/transformers/pull/5068 | 639,953,562 | MDExOlB1bGxSZXF1ZXN0NDM1NDUxMzYz | 5,068 | Fix all sphynx warnings | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5068?src=pr&el=h1) Report\n> Merging [#5068](https://codecov.io/gh/huggingface/transformers/pull/5068?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/439aa1d6e9c953069f75fc23c737221d0df2c977&el=desc) will **increase** coverage by `0.98%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5068?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5068 +/- ##\n==========================================\n+ Coverage 76.45% 77.43% +0.98% \n==========================================\n Files 130 130 \n Lines 22024 22024 \n==========================================\n+ Hits 16839 17055 +216 \n+ Misses 5185 4969 -216 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5068?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5068/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `92.85% <ø> (ø)` | |\n| [src/transformers/configuration\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5068/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3Q1LnB5) | `96.42% <ø> (ø)` | |\n| [src/transformers/configuration\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5068/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3hsbmV0LnB5) | `94.00% <ø> (ø)` | |\n| [src/transformers/data/processors/utils.py](https://codecov.io/gh/huggingface/transformers/pull/5068/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvdXRpbHMucHk=) | `27.63% <ø> (ø)` | |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5068/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `70.58% <ø> (ø)` | |\n| [src/transformers/modeling\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5068/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `78.16% <ø> (ø)` | |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/5068/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `93.02% <ø> (ø)` | |\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/5068/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `88.19% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5068/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `75.33% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5068/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `91.28% <ø> (ø)` | |\n| ... and [16 more](https://codecov.io/gh/huggingface/transformers/pull/5068/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5068?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5068?src=pr&el=footer). 
Last update [439aa1d...6a52a4b](https://codecov.io/gh/huggingface/transformers/pull/5068?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,592 | 1,592 | 1,592 | COLLABORATOR | null | This PR touches a lot of files but nothing that should break anything, it's just there to fix all sphynx warnings. Why? Well some of them are just there to annoy us and don't have any effect, but for roughly half of them, there is something wrong going on in the docs so it's best to fix them all, especially to make it easier to spot new warnings introduced when writing new docs.
The only thing that is a real change is that I removed `members` in the `AdamWeightDecay` because it was documenting its `apply_gradients` method using the docstring from keras which is not sphynx-compatible (it was not rendering properly in our docs). If we really want that method documented (it was the only one), I can rewrite the docstring. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5068/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5068/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5068",
"html_url": "https://github.com/huggingface/transformers/pull/5068",
"diff_url": "https://github.com/huggingface/transformers/pull/5068.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5068.patch",
"merged_at": 1592340603000
} |
https://api.github.com/repos/huggingface/transformers/issues/5067 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5067/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5067/comments | https://api.github.com/repos/huggingface/transformers/issues/5067/events | https://github.com/huggingface/transformers/issues/5067 | 639,921,212 | MDU6SXNzdWU2Mzk5MjEyMTI= | 5,067 | Modify BERT/BERT-descendants to be TorchScript-able (not just traceable) | {
"login": "sbrody18",
"id": 67021628,
"node_id": "MDQ6VXNlcjY3MDIxNjI4",
"avatar_url": "https://avatars.githubusercontent.com/u/67021628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sbrody18",
"html_url": "https://github.com/sbrody18",
"followers_url": "https://api.github.com/users/sbrody18/followers",
"following_url": "https://api.github.com/users/sbrody18/following{/other_user}",
"gists_url": "https://api.github.com/users/sbrody18/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sbrody18/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sbrody18/subscriptions",
"organizations_url": "https://api.github.com/users/sbrody18/orgs",
"repos_url": "https://api.github.com/users/sbrody18/repos",
"events_url": "https://api.github.com/users/sbrody18/events{/privacy}",
"received_events_url": "https://api.github.com/users/sbrody18/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi! This is interesting. Could you resume what are the changes that would be needed in order to have our models scriptable?",
"Sure, mostly my changes fall into these categories:\r\n\r\n### 1. Class members can only be basic types, None, nn.Modules, or list or tuple thereof\r\n- Solution: don't save whole config in the model, only individual entries you need, which are basic types\r\n- Solution for nn.functional: use the nn.Module equivalent of nn.functional\r\n- Solution for other functions: define and call the function globally, not as a class member\r\n\r\n### 2. Inputs are assumed to be Tensors\r\n- Solution: use typing to tell TorchScript the types (note - requires typing to be supported. I checked in python 3.7, but not 3.5 or 3.6)\r\n\r\n### 3. TorchScript can't figure out that an Optional is not None\r\n- Solution: add assertions to help TorchScript\r\n\r\n### 4. Variable types are not allowed to change depending on conditionals\r\n- Solution: use consistent types (with Optional to tell TorchScript that a variable/argument can be None) - this is where I had to change the interface, since current BERT models can optionally return attention probabilities. Had to change so that they always return the same sized output tuple, with None values, instead).\r\n\r\n### 5. TorchScript can't handle the expand (*) operator on lists\r\n- Solution: explicitly enumerate the arguments\r\n\r\n### 6. You can't use nn.Modules as local variables (take variable number of args)\r\n- Solution: use the nn.functional equivalents of the modules.\r\n\r\n### 7. TorchScript doesn't know about nn.ModuleList's enumerate \r\n- Solution: use a regular for loop\r\n\r\nMost of these are pretty small changes and do not affect the logic. #4 and #1c can be tricky, and #5 might be an issue with recent changes made here: https://github.com/huggingface/transformers/pull/4874",
"Hi @sbrody18, \r\n\r\nThanks for opening this issue and taking the time to dive into our TorchScript support.\r\n\r\nRegarding **_A scriptable model would allow for variable-length input, offering big speedup gains and simplification_**:\r\n\r\nDo you have some numbers to compare against the current transformers library? We ran some TorchScript tests and the differences where not that huge at that time, may be this has changed since? I (and probably others) would be very interested in knowing more on this aspect.\r\n\r\nRegarding the list of changes you suggested: \r\n\r\nI'm currently not really in favour of such changes as they are almost changing all the way the library is designed and would have an impact on all the models. Some of them might be further discussed if there are real performance benefits.",
"Hi @mfuntowicz,\r\nMy co-workers and I have run the experiments that show that inference time scales more-or-less linearly with the input size (also supported in the linked article below).\r\n\r\nAssuming you are trying to run in C++ (which is the reason to use TorchScript), the current solution, using `trace()` means that you can only use fixed length input - you have to set a large value for max_length to support your longest expected input, and zero-pad all input to the max-length.\r\nThat means if your max_length is 1000 tokens and your average length is 20 tokens, your inference is taking 50x longer than it should.\r\nYou can see an example of how big a difference this makes, [here](https://medium.com/roblox-tech-blog/how-we-scaled-bert-to-serve-1-billion-daily-requests-on-cpus-d99be090db26), under 'Scenario #3: Smaller Inputs (Dynamic Shapes)'.\r\n\r\nI'm guessing the tests you ran were focused specifically on the technical behavior of the models on a fixed input set and didn't take into account the max-length issue. Also, this is only an issue if you need to use TorchScript in order to run in C++.\r\n\r\nRe. the change to design, my intention is to keep the model changes to a minimum (e.g., adding type hints and asserts does not change the design at all) and make sure they are fully backwards compatible. There would still be some changes required, but I don't think they are drastic.\r\n\r\nAs I said in the original post, I have a PR where I did a lot of the work, and I'd be happy to work with someone to figure out how to get it to a state where it can be merged.",
"@sbrody18 do you mind sharing your fork ? ",
"Yes, I can do so, but it may have to wait a week or two - things are busy at the moment.",
"I am very interested in this work as well. Our team would like to be able to use TorchScript so we can train without depending on Python. If there's any way I can be of help, I would gladly offer some time here!",
"Sorry for the delay. I hope to have a reasonable PR later this week.",
"My change is available at https://github.com/sbrody18/transformers/tree/scripting\r\n\r\nNote that it is based off of a commit from earlier this month:\r\nhttps://github.com/huggingface/transformers/compare/ef0e9d806c51059b07b98cb0279a20d3ba3cbc1d...sbrody18:scripting\r\nSince then there have been changes made to the BertModel interface adding a return_tuple argument and changing the return type of the forward method, and this would require more effort to resolve.\r\n\r\nI listed the principles I used in https://github.com/huggingface/transformers/issues/5067#issuecomment-644989375. The original components tended to return different sized tuples, depending on arguments, which is problematic for TorchScript. When a component BertX required an interface change to be scriptable, I made a BertScriptableX version with the modifications, and had the BertX component inherit from it and just modify the output so it is compatible with the original API.\r\n\r\nI made scriptable versions of BertModel and all the BertFor\\<Task\\> classes, except BertForMaskedLM (some complexities there were too much work for a proof of concept).\r\nI added a [test](https://github.com/sbrody18/transformers/blob/scripting/tests/test_modeling_bert.py#L529) to demonstrate the scripting capability.\r\n\r\nNote that my change disables the [gradient_checkpoint path](https://github.com/sbrody18/transformers/blob/scripting/src/transformers/modeling_bert.py#L474-492) in the encoder. I think this can be resolved, but I didn't have the time to work on it.",
"@sgugger @joeddav: see comment above for preliminary PR. \r\nProbably too big and complicated to try to merge as is, but would be happy to work with someone to break things down into reasonable chunks.",
"Thanks for all the work. Looking at this and our recent changes in the model API (in particular the return_dict argument) I think we probably won't be able to have the models be fully compatible with TorchScript. What is possible however would be to have a second version of the models that don't have the option of return_dict (we can also remove output_hiddens/output_attentions if it makes life easier) and would be fully scriptable.\r\n\r\nSince you already started with some components in a different class, I think we should have two models (let's say `BertModel` and `ScriptableBertModel`) with the same named parameters so you can seemlessly save/load from one to the other (a workflow would then be to experiment with `BertModel`, save the fine-tuned model and then go to `ScriptableBertModel` for inference for instance).\r\n\r\nThen I'm not sure what's easiest:\r\n- have the two inherit from some base class and have a minimal of methods that need to be different (probably just the forward?)\r\n- or have the second class be a complete rewrite.\r\n\r\nI think we should focus on having a proof of concept on one model before moving forward with others. ",
"That makes sense to me. It will probably result in some amount of code duplication, and we'd need to make sure we keep the named parameters in sync, but probably easier to maintain.\r\nSo would you suggest the ScriptableBertModel is a separate file?",
"Not necessarily a separate file, I guess it depends on the amount of code to rewrite. I think we can worry about this in a second stage, once we have a good poc.",
"@sgugger Please see POC implementation in PR above.",
"@sbrody18 in the original PR https://github.com/huggingface/transformers/pull/6846 you created for this issue, you mentioned you saw a large perf increase with dynamic sequences. What did you use as a test to make that determination?",
"@kevinstephano - see discussion and conclussions [here](https://github.com/huggingface/transformers/pull/6907#issuecomment-687343119)\r\nWe saw a large perfomance increase with an older version of PyTorch, where traced models required the input to be the same length as the one used for tracing, making it necessary to pad short sequences at inference, and adding a lot of unnecessary computation overhead.\r\nWith recent versions of PyTorch (>=1.3, I think), this is no longer the case.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hi,\r\n\r\nI just tried `jit.script()`ing using Bert from the PR (just copied modeling_bert, modeling_utils and replaced relative imports of other dependencies with imports from transformers master branch)\r\nI see there are try blocks left in the code, which cause `jit.script` to fail:\r\n```\r\nUnsupportedNodeError: try blocks aren't supported:\r\n File \"/zhome/1d/8/153438/experiments/master-thesis/export_model/modeling_utils_script_proof.py\", line 131\r\n Get torch.device from module, assuming that the whole module has one device.\r\n \"\"\"\r\n try:\r\n ~~~ <--- HERE\r\n return next(self.parameters()).device\r\n except StopIteration:\r\n```\r\n\r\n@sbrody18 how did you export the model? I guess the workaround would be to remove try blocks, but apparently it did work for you as it is.\r\n",
"@fteufel you can see #6846 for a stand-alone implementation that worked **at a previous version of the transformers library**. Maybe that's good enough for your purposes?\r\nThe transformers library has changed significantly since these PRs and I'm not sure if that try was added. If you are using code from the transformers master branch in the model itself, it's likely you will encounter several unscriptable bits.\r\nSpecifically for the next function, you can either:\r\na. remove the try block, since there should always be at least one parameter on the model\r\nb. use the next with default:\r\n first_param = next(self.parameters(), None)\r\n if not first_param: <handle it>\r\n return first_param.device\r\nc. figure out a better way to decide the model device :)",
"@sbrody18 It seems have not been merged to official transformers ? My transformers Version: 4.21.3, and it can not use `jit.script` to convert BERT model to TorchScript.\r\n"
] | 1,592 | 1,667 | 1,609 | NONE | null | # 🚀 Feature request
Modify BERT models (src/transformers/modeling_bert.py) to conform to TorchScript requirements, so they can be ``jit.script()``-ed, not just ``jit.trace()``-ed (as is [currently the only supported option](https://huggingface.co/transformers/torchscript.html))
*Note:* I have a working version implementing this, which I would like to contribute.
See below.
## Motivation
A scriptable model would allow for variable-length input, offering big speedup gains and simplification (no need to create different models for different input lengths).
In addition, it would avoid other potential pitfalls with tracing (e.g., code paths that are input dependent and not covered by the tracing example input).
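To illustrate the difference, here is a minimal sketch (not the code from my branch; the checkpoint name and dummy input are placeholders):
```python
import torch
from transformers import BertModel

# Per the library's TorchScript docs, torchscript=True is needed when tracing.
model = BertModel.from_pretrained("bert-base-uncased", torchscript=True)
model.eval()

dummy_input = torch.randint(0, 1000, (1, 16))

# trace(): records the ops run for this one input; shapes and
# input-dependent branches outside this path are baked in.
traced = torch.jit.trace(model, dummy_input)

# script(): compiles the module's Python source, so variable-length
# input and data-dependent control flow are handled -- but it requires
# the model code itself to be TorchScript-compatible, which is what
# this issue proposes. Today this call fails on the BERT models.
scripted = torch.jit.script(model)
```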
Related issues:
https://github.com/huggingface/transformers/issues/2417
https://github.com/huggingface/transformers/issues/1204
possibly also
https://github.com/huggingface/transformers/issues/1477
https://github.com/huggingface/transformers/issues/902
## Your contribution
I have a working PR that modifies all the models in src/transformers/modeling_bert.py and makes them TorchScript-able. I have not tested it on other models that use BERT components (e.g., albert), but it should be possible to expand the capability to those, as well.
However, it would require some significant work to make it ready for submission: besides formatting, documentation, testing, etc., my current version changes the method signatures, and I would need to avoid that to maintain backward compatibility.
Before putting in that work, I'd like to make sure that such a PR is something you'd be interested in and would be willing to merge in, assuming it meets the requirements.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5067/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5067/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5066 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5066/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5066/comments | https://api.github.com/repos/huggingface/transformers/issues/5066/events | https://github.com/huggingface/transformers/issues/5066 | 639,915,872 | MDU6SXNzdWU2Mzk5MTU4NzI= | 5,066 | Tokenization+Transformers works with PyTorch but not TensorFlow on TPU | {
"login": "oja",
"id": 5075260,
"node_id": "MDQ6VXNlcjUwNzUyNjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/5075260?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oja",
"html_url": "https://github.com/oja",
"followers_url": "https://api.github.com/users/oja/followers",
"following_url": "https://api.github.com/users/oja/following{/other_user}",
"gists_url": "https://api.github.com/users/oja/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oja/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oja/subscriptions",
"organizations_url": "https://api.github.com/users/oja/orgs",
"repos_url": "https://api.github.com/users/oja/repos",
"events_url": "https://api.github.com/users/oja/events{/privacy}",
"received_events_url": "https://api.github.com/users/oja/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"This makes sense! Do you have some examples of some tokenization that can happen on TPUs with TensorFlow, so that we may consider what we have to do to enable this?",
"I am not totally sure, but one thing to look at would probably be TF's native `tf.keras.preprocessing.text.Tokenizer`, which (I think) works when used within tf.data.Dataset maps on a TPU. \r\n\r\nhttps://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/text/Tokenizer",
"Assigning myself to keep this in mind",
"\r\n> I am not totally sure, but one thing to look at would probably be TF's native `tf.keras.preprocessing.text.Tokenizer`, which (I think) works when used within tf.data.Dataset maps on a TPU.\r\n> \r\n> https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/text/Tokenizer\r\n\r\nInteresting. Is there a way to translate HF's tokenziers to this?\r\n\r\nI am wondering it's close to being possible that we don't need to convert text to Ids on the HF's data processors. ",
"TensorFlow 2.3.x adds the Sentencepiece tokenizer to `tensorflow_text`. You can use this short script to turn a HuggingFace tokenizer into a `tensorflow_text.SentencepieceTokenizer`: https://gist.github.com/noahtren/6f9f6ecf2f81d0975c4f54afaeb95318\r\n\r\nI tested it on TPU and it's been working for me. My experience is that HuggingFace tokenizers are wrappers for https://github.com/google/sentencepiece so it's really simple to make it compatible with TensorFlow graph mode. Not sure yet if this works for all huggingface pretrained tokenizers.",
"Did you try on the hf rust tokenizers as well? ",
"@Santosh-Gupta No, I haven't tried that",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,592 | 1,605 | 1,605 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): RoBERTa
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. In order to use Huggingface's tokenizer with TensorFlow's data pipeline (`tf.data.Dataset.map()`), the tokenize function must be wrapped in `tf.py_function()` or `tf.numpy_function()` (issue #3851).
2. Neither of these functions is supported on TPU (tensorflow/tensorflow#30818).
3. This makes it impossible to run Huggingface transformers + tokenizer on a TPU using TensorFlow
4. However, the same tokenization on a TPU works under PyTorch (!)
One workaround is to do tokenization before entering the TF data pipeline, but unfortunately my dataset is too large for that.
Example of code that is necessary, but fails on a TPU:
```
def tokenize_encode_map_fn(text):
    # `hypothesis` is assumed to be defined in the enclosing scope (e.g. for an NLI-style task)
    encoded = tf.py_function(tokenize_encode,  # tokenize_encode is a wrapper around the Huggingface tokenizer and encoder
                             inp=[text, hypothesis],
                             Tout=[tf.int32, tf.int32])
    return {"input_ids": encoded[0], "attention_mask": encoded[1]}

tf_dataset.map(tokenize_encode_map_fn)
```
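For reference, the pre-tokenization workaround looks roughly like the following sketch, under the assumption that the encoded corpus fits in memory (`texts` is a placeholder list of strings):
```python
import tensorflow as tf
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

texts = ["example sentence one", "example sentence two"]  # placeholder corpus

# Tokenize eagerly, outside the tf.data graph, so no py_function is needed.
enc = tokenizer.batch_encode_plus(texts, max_length=128, pad_to_max_length=True)

dataset = tf.data.Dataset.from_tensor_slices({
    "input_ids": enc["input_ids"],
    "attention_mask": enc["attention_mask"],
})
```
This only works when the encoded corpus fits in memory (or can be streamed to TFRecords first), which is exactly the limitation described above.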
## Expected behavior
Be able to run the tokenizer on a TPU with TensorFlow, like you can in PyTorch.
## Environment info
- `transformers` version: transformers-2.11.0
- Platform: Google Colab TPU
- Python version: Python 3.6.9
- PyTorch version (GPU?):
- Tensorflow version (GPU?): TensorFlow 2.2.0 / TPU
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5066/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5066/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5065 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5065/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5065/comments | https://api.github.com/repos/huggingface/transformers/issues/5065/events | https://github.com/huggingface/transformers/pull/5065 | 639,879,912 | MDExOlB1bGxSZXF1ZXN0NDM1MzkwNTg3 | 5,065 | [WIP] TF Trainer with TPUs | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5065?src=pr&el=h1) Report\n> Merging [#5065](https://codecov.io/gh/huggingface/transformers/pull/5065?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/fc24a93e6493c2689e5585d12b7c43730ad9b3ea&el=desc) will **decrease** coverage by `0.05%`.\n> The diff coverage is `12.28%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5065?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5065 +/- ##\n==========================================\n- Coverage 79.02% 78.96% -0.06% \n==========================================\n Files 138 138 \n Lines 24064 24089 +25 \n==========================================\n+ Hits 19017 19023 +6 \n- Misses 5047 5066 +19 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5065?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5065/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `17.92% <8.00%> (-0.77%)` | :arrow_down: |\n| [src/transformers/training\\_args\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5065/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzX3RmLnB5) | `53.19% <42.85%> (+2.02%)` | :arrow_up: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5065/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.38% <0.00%> (-0.24%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5065/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.92% <0.00%> (+0.29%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5065?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5065?src=pr&el=footer). Last update [fc24a93...9326d27](https://codecov.io/gh/huggingface/transformers/pull/5065?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"@Colanim Can you summarize your findings here? It would be better suited than going across multiple issues.",
"@jplu Sure !\r\n\r\n* [x] Optimizer was passed as arguments to `_step` function, but we can pass only Tensors (issue #4994). This is fixed by ef02be8.\r\n\r\n* [x] Optimization steps were not computed correctly : it was ignoring gradient accumulation (issue #5072). This is fixed by a897e09.\r\n\r\n* [x] Computation of number of steps should take into account `dataloader_drop_last` (mentioned in #5072), and use `math.floor` in that case (instead of `math.ceil`). Fixed in #5051\r\n\r\n* [ ] I'm still having error as described in #4996, but I didn't figure the reason yet. Maybe it's my model that has a problem, not `TFTrainer`.",
"Thanks for having sum up the issues here.\r\n\r\n> * [ ] Computation of number of steps should take into account `dataloader_drop_last` (mentioned in #5072), and use `math.floor` in that case (instead of `math.ceil`). Not fixed yet.\r\n\r\nThis is fixed in PR #5051.\r\n\r\n> * [ ] I'm still having error as described in #4996, but I didn't figure the reason yet. Maybe it's my model that has a problem, not `TFTrainer`.\r\n\r\nThis is what I'm currently looking for, but still not figuring out why :/\r\n\r\nWhich TPUs are you using, over ctpu, Cloud AI or Colab?",
"> Which TPUs are you using, over ctpu, Cloud AI or Colab?\r\n\r\nI'm currently using ctpu, but I could see similar issue when using Colab.",
"For me, this is the problem : https://github.com/huggingface/transformers/blob/5f721ad6e48c9d846de25c3fefa0e50a306cbf10/src/transformers/trainer_tf.py#L388-L389\r\n\r\nSomehow the Exception is not catched on TPU, which crash the training. Using `max_steps` argument instead of `num_train_epochs` fix the problem because we repeat the dataset, and therefore we never have an out of range error.\r\n\r\nIn the eval code, since we iterate the dataset without repeat, it's causing the same problem.",
"Can I see all your logs output during the training?",
"What is the command line you are using to create your TPU and run the process? In order to be aligned with the same errors :)",
"@jplu finally I was wrong : even when using `max_steps` I'm having the problem.\r\n\r\nIt always happen at the beginning of the end of first epoch (for both validation and training). I think it's due to how the dataset is iterated. \r\n\r\nAccording to [this link](https://www.kaggle.com/mgornergoogle/custom-training-loop-with-100-flowers-on-tpu), we should have a single iterator for the whole training procedure (iterator should not be reset after each epoch), and always `repeat` the dataset indefinitely. Even for validation dataset.\r\n\r\nUnfortunately I'm having trouble with GCP right now, I can't try things on my end..",
"Ok, can you share your command lines here please. Because even the size computation cannot be run on TPU with TF 2.2.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,592 | 1,600 | 1,599 | CONTRIBUTOR | null | This PR makes the TF trainer fully compatible with TPUs.
Should fix #5042, #4996 and #4994. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5065/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5065/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5065",
"html_url": "https://github.com/huggingface/transformers/pull/5065",
"diff_url": "https://github.com/huggingface/transformers/pull/5065.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5065.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/5064 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5064/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5064/comments | https://api.github.com/repos/huggingface/transformers/issues/5064/events | https://github.com/huggingface/transformers/pull/5064 | 639,836,531 | MDExOlB1bGxSZXF1ZXN0NDM1MzU1MjIy | 5,064 | Reorganize documentation | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5064?src=pr&el=h1) Report\n> Merging [#5064](https://codecov.io/gh/huggingface/transformers/pull/5064?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d5477baf7d87b9bdad386f2f317732b85277b06b&el=desc) will **decrease** coverage by `0.05%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5064?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5064 +/- ##\n==========================================\n- Coverage 77.41% 77.36% -0.06% \n==========================================\n Files 130 130 \n Lines 22023 22023 \n==========================================\n- Hits 17050 17037 -13 \n- Misses 4973 4986 +13 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5064?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5064/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `76.35% <0.00%> (-2.30%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5064/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `75.00% <0.00%> (-0.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5064/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.57% <0.00%> (+0.31%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5064?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5064?src=pr&el=footer). Last update [d5477ba...d7a3d5d](https://codecov.io/gh/huggingface/transformers/pull/5064?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,592 | 1,592 | 1,592 | COLLABORATOR | null | This PR does two things:
- reorganize the doc topics into five sections
- complete the model list in the index
Also, I cut all lines at 119 characters (like the code); otherwise it's not readable in Visual Studio Code (and, I imagine, other viewers). I removed the stars from authors since they weren't pointing to anything (I can add them back, but we should then explain what they mean), and made all author lists comma-separated with a final 'and'. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5064/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5064/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5064",
"html_url": "https://github.com/huggingface/transformers/pull/5064",
"diff_url": "https://github.com/huggingface/transformers/pull/5064.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5064.patch",
"merged_at": 1592394921000
} |
https://api.github.com/repos/huggingface/transformers/issues/5063 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5063/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5063/comments | https://api.github.com/repos/huggingface/transformers/issues/5063/events | https://github.com/huggingface/transformers/issues/5063 | 639,812,340 | MDU6SXNzdWU2Mzk4MTIzNDA= | 5,063 | Non-deterministic training issue on GPU: TF-BERT | {
"login": "MFreidank",
"id": 6368040,
"node_id": "MDQ6VXNlcjYzNjgwNDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6368040?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MFreidank",
"html_url": "https://github.com/MFreidank",
"followers_url": "https://api.github.com/users/MFreidank/followers",
"following_url": "https://api.github.com/users/MFreidank/following{/other_user}",
"gists_url": "https://api.github.com/users/MFreidank/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MFreidank/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MFreidank/subscriptions",
"organizations_url": "https://api.github.com/users/MFreidank/orgs",
"repos_url": "https://api.github.com/users/MFreidank/repos",
"events_url": "https://api.github.com/users/MFreidank/events{/privacy}",
"received_events_url": "https://api.github.com/users/MFreidank/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi! [This StackOverflow issue](https://datascience.stackexchange.com/questions/14812/making-keras-tensorflow-code-execution-deterministic-on-a-gpu) might be of interest to you.\r\n\r\nNamely:\r\n\r\n> In fact, the randomness(non-determinstic) is a behavior of GPU.\r\n> \r\n> The reason behind is that cuDNN(and othere CUDA stuffs) uses a non-deterministic algorithm to compute gradients, thus we can't determine anything.",
"Note that this issue is being addressed as an [issue in the tensorflow-determinism repo](https://github.com/NVIDIA/tensorflow-determinism/issues/19). I have also added a reference to that repo in the above-mentioned Stack Exchange / Data Science question.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,592 | 1,598 | 1,598 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...):
TF-BERT
Language I am using the model on (English, Chinese ...):
English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
SST-2
* [ ] my own task or dataset: (give details below)
## To reproduce
In spite of combining learnings from:
* [the "complete recipe" in NVIDIA's slides from gputechconf](https://developer.download.nvidia.com/video/gputechconf/gtc/2019/presentation/s9911-determinism-in-deep-learning.pdf)
* [a recently suggested workaround](https://github.com/tensorflow/tensorflow/issues/38185#issuecomment-643014439) for non-determinism issues with crossentropy loss
... I am still arriving at the following [short, non-deterministic colab notebook example](https://colab.research.google.com/drive/1VSU8lYFD0E1HKZrIL1MvyIRwAktlSF_t?usp=sharing) to train BERT .
My results for the sum of model weights (as computed with [this suggested function](https://github.com/NVIDIA/tensorflow-determinism/issues/2#issuecomment-548210203)) after training **for only 5 steps** are (differences are **`highlighted`** below):
| | Device | Before training | After training |
| ------------- | ------------- | ------------- | ------------- |
| Run 1 | GPU | -641227.5609667897224 | -641237.442 **`5159916282`** |
| Run 2 | GPU | -641227.5609667897224 | -641237.442 **`3093758523`** |
| | | | |
| Run 1 | CPU | -641227.5609667301178 | -641238.1506845243275 |
| Run 2 | CPU | -641227.5609667301178 | -641238.1506845243275 |
This variance becomes increasingly pronounced the longer the model is trained.
I suspect a general problem in BERT's computational graph that introduces non-determinism. As a result, this could affect a large part of the huggingface community.
Please keep in mind that determinism is of key importance in certain industries and is also a prerequisite for reproducible research.
Could you please help identify the source of non-determinism and provide guidance on how we can resolve it?
Steps to reproduce the behavior:
1. Execute colab notebook above on the GPU runtime using Tensorflow 2.2.0, observe non-deterministic behavior
2. Execute colab notebook above on the CPU runtime using Tensorflow 2.2.0, observe deterministic behavior
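For reference, the determinism-related setup these sources suggest boils down to something like the following (a sketch assuming TF 2.2; `TF_DETERMINISTIC_OPS` requests deterministic cuDNN/GPU kernels, though its coverage is known to be partial):
```python
import os
import random

import numpy as np
import tensorflow as tf

os.environ["TF_DETERMINISTIC_OPS"] = "1"  # request deterministic GPU/cuDNN kernels (TF >= 2.1)

SEED = 42
random.seed(SEED)
np.random.seed(SEED)
tf.random.set_seed(SEED)
```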
## Expected behavior
Training should be deterministic both on GPU and CPU runtime for TF 2.2.0.
## Environment info
* tensorflow==2.2.0
* nlp==0.2.1
- `transformers` version: 2.11.0
- Platform: Linux, Ubuntu 18.04.3 LTS bionic
- Python version: 3.6.9
- PyTorch version (GPU?): -
- Tensorflow version (GPU?): 2.2.0-gpu
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5063/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5063/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5062 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5062/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5062/comments | https://api.github.com/repos/huggingface/transformers/issues/5062/events | https://github.com/huggingface/transformers/issues/5062 | 639,784,991 | MDU6SXNzdWU2Mzk3ODQ5OTE= | 5,062 | What do the following parameters mean during the initialization of T5 model? | {
"login": "abhisheknovoic",
"id": 62595485,
"node_id": "MDQ6VXNlcjYyNTk1NDg1",
"avatar_url": "https://avatars.githubusercontent.com/u/62595485?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhisheknovoic",
"html_url": "https://github.com/abhisheknovoic",
"followers_url": "https://api.github.com/users/abhisheknovoic/followers",
"following_url": "https://api.github.com/users/abhisheknovoic/following{/other_user}",
"gists_url": "https://api.github.com/users/abhisheknovoic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhisheknovoic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhisheknovoic/subscriptions",
"organizations_url": "https://api.github.com/users/abhisheknovoic/orgs",
"repos_url": "https://api.github.com/users/abhisheknovoic/repos",
"events_url": "https://api.github.com/users/abhisheknovoic/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhisheknovoic/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"I added a description of `d_kv`. `d_model` is as you say the size of the embedding. `d_kv` is the size of the key, query, value projections. \r\n\r\nIf you look at this blog post: http://jalammar.github.io/illustrated-transformer/\r\n\r\n`d_model` corresponds to the size of a vector `x_1` and `d_kv` of a vector `q_1, k_1, v_1`"
] | 1,592 | 1,592 | 1,592 | NONE | null | Hello,
I am aware of the general Transformer model, and I believe it is the same architecture used in T5.
I know we have the input `vocab_size`, which is the total vocabulary size. Besides this, the important parameters would be the embedding size (the size of the embedding of each token), the number of layers, the number of heads, and others.
In particular, looking at the `T5Config` class that is used to initialize a T5 model, what are the `d_kv` and `d_model` parameters?
Is `d_model` the size of the embedding, and if so, what is `d_kv`? The docstring is not really clear to me.
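For reference, a sketch of initializing a model from `T5Config` (the values mirror the t5-small defaults; per the answer below, `d_model` is the hidden/embedding size and `d_kv` is the per-head projection size, so the inner attention dimension is `num_heads * d_kv`):
```python
from transformers import T5Config, T5Model

config = T5Config(
    vocab_size=32128,
    d_model=512,  # hidden/embedding size: each token vector x_i has this size
    d_kv=64,      # per-head size of the query/key/value projections (q_i, k_i, v_i)
    num_heads=8,  # inner attention dimension works out to 8 * 64 = 512
    num_layers=6,
)
model = T5Model(config)
```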
Thanks for your help. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5062/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5062/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5061 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5061/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5061/comments | https://api.github.com/repos/huggingface/transformers/issues/5061/events | https://github.com/huggingface/transformers/issues/5061 | 639,767,976 | MDU6SXNzdWU2Mzk3Njc5NzY= | 5,061 | More flexible wandb support for Trainer | {
"login": "Guitaricet",
"id": 2821124,
"node_id": "MDQ6VXNlcjI4MjExMjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2821124?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Guitaricet",
"html_url": "https://github.com/Guitaricet",
"followers_url": "https://api.github.com/users/Guitaricet/followers",
"following_url": "https://api.github.com/users/Guitaricet/following{/other_user}",
"gists_url": "https://api.github.com/users/Guitaricet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Guitaricet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Guitaricet/subscriptions",
"organizations_url": "https://api.github.com/users/Guitaricet/orgs",
"repos_url": "https://api.github.com/users/Guitaricet/repos",
"events_url": "https://api.github.com/users/Guitaricet/events{/privacy}",
"received_events_url": "https://api.github.com/users/Guitaricet/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2155169140,
"node_id": "MDU6TGFiZWwyMTU1MTY5MTQw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/trainer",
"name": "trainer",
"color": "2ef289",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Yeah I agree with the motivations here. From my experience I like to initialize wandb as soon as the main script starts which has benefits like \r\n- Capturing all the console logs, so if something failed before the execution reached the trainer class one could debug the issue through Wandb. This is specially useful for doing training in a kubernetes environment where the logs are not very easily available after a crash.\r\n- Instantiating at the beginning also allows us to capture all the CLI arguments properly before they might have been modified. This is a problem right now because Wandb only captures the training_args leaving out model_args or data args. This is important for replicating a training run in the future.",
"Doesn't it work already when initializing wandb outside?\r\n\r\nI believe that `wandb.init` does not create a new run if one is already running so all the functions should be available anywhere in the script.",
"When looking over the code, I was pretty sure it will initialize a new run. However, I checked and everything (mentioned in the issue) works smoothly. ",
"Ok, thanks for checking @Guitaricet "
] | 1,592 | 1,593 | 1,593 | NONE | null | # 🚀 Feature request
**A.** Make it possible to initialize wandb outside Trainer class.
**B.** Add `use_wandb` argument to the Trainer arguments.
## Motivation
**A.** Currently, the wandb configuration inside Trainer is very limited. There are only three environment variables: `WANDB_WATCH`, `WANDB_PROJECT`, and `WANDB_DISABLED` (and `WANDB_DISABLED` does not work properly in some cases).
Making it possible to initialize wandb outside Trainer will allow us to:
1. Add custom fields to wandb.config
1. Add tags and notes, and generally make the configuration as flexible as possible
1. Upload files
1. Use `wandb.log` more safely outside the Transformers code
It will also make the interaction with wandb clearer.
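For illustration, the workflow this would enable (a sketch: the project name, tags, and extra config fields are hypothetical, and `model`/`train_dataset` are assumed to be defined; per the follow-up comments, `wandb.init` inside the Trainer reuses an already-active run):
```python
import wandb
from transformers import Trainer, TrainingArguments

wandb.init(project="my-project", tags=["baseline"], notes="exploratory run")
wandb.config.update({"data_version": "v2"})  # custom fields

args = TrainingArguments(output_dir="out")
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)  # model/dataset assumed defined
trainer.train()  # the Trainer's own wandb.init picks up the active run
```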
**B.** It is a much clearer interface than an env variable. The question here is which option should have the priority?
## Your contribution
I have an idea of how to do this without breaking backward compatibility. I can make a PR, but may need some minor help writing tests. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5061/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5061/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5060 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5060/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5060/comments | https://api.github.com/repos/huggingface/transformers/issues/5060/events | https://github.com/huggingface/transformers/pull/5060 | 639,665,221 | MDExOlB1bGxSZXF1ZXN0NDM1MjEzOTE1 | 5,060 | Make default_data_collator more flexible and deprecate old behavior | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5060?src=pr&el=h1) Report\n> Merging [#5060](https://codecov.io/gh/huggingface/transformers/pull/5060?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d5477baf7d87b9bdad386f2f317732b85277b06b&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `84.21%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5060?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5060 +/- ##\n==========================================\n+ Coverage 77.41% 77.43% +0.01% \n==========================================\n Files 130 130 \n Lines 22023 22029 +6 \n==========================================\n+ Hits 17050 17059 +9 \n+ Misses 4973 4970 -3 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5060?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5060/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <50.00%> (+0.09%)` | :arrow_up: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/5060/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `98.33% <93.33%> (+8.67%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5060?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5060?src=pr&el=footer). Last update [d5477ba...857975c](https://codecov.io/gh/huggingface/transformers/pull/5060?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Refactored on @julien-c suggestion and removed the little hack to handle tensors and lists of ints (which was cauding a 4x slowdown on my tests). I just use the first features to test if we have a tensor (then stack) or not (then use torch.tensor).",
"LGTM!"
] | 1,592 | 1,592 | 1,592 | COLLABORATOR | null | This PR does two things:
- avoid breaking changes when people had a custom `DataCollator` with a `collate_batch` method (there is still the breaking change with `DataCollator` not being a class anymore)
- makes `default_data_collator` more flexible by handling dicts on top of `InputExamples` classes coming from our examples
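For illustration, a sketch of what the more flexible default accepts (the toy features below are hypothetical):
```python
from transformers import default_data_collator

features = [
    {"input_ids": [101, 2023, 102], "attention_mask": [1, 1, 1], "label": 0},
    {"input_ids": [101, 2017, 102], "attention_mask": [1, 1, 1], "label": 1},
]
batch = default_data_collator(features)  # dict of stacked tensors; "label" becomes batch["labels"]
```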
This fixes #5049 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5060/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5060/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5060",
"html_url": "https://github.com/huggingface/transformers/pull/5060",
"diff_url": "https://github.com/huggingface/transformers/pull/5060.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5060.patch",
"merged_at": 1592421892000
} |
https://api.github.com/repos/huggingface/transformers/issues/5059 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5059/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5059/comments | https://api.github.com/repos/huggingface/transformers/issues/5059/events | https://github.com/huggingface/transformers/pull/5059 | 639,637,030 | MDExOlB1bGxSZXF1ZXN0NDM1MTkxMDEy | 5,059 | [cleanup] examples test_run_squad uses tiny model | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5059?src=pr&el=h1) Report\n> Merging [#5059](https://codecov.io/gh/huggingface/transformers/pull/5059?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d5477baf7d87b9bdad386f2f317732b85277b06b&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5059?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5059 +/- ##\n=======================================\n Coverage 77.41% 77.41% \n=======================================\n Files 130 130 \n Lines 22023 22023 \n=======================================\n Hits 17050 17050 \n Misses 4973 4973 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5059?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5059/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `75.00% <0.00%> (-0.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5059/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.42% <0.00%> (+0.15%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5059?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5059?src=pr&el=footer). Last update [d5477ba...6bef869](https://codecov.io/gh/huggingface/transformers/pull/5059?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Merging so that external contrib can pick up the next steps without conflicts. Feel free to add comments here or in #5057 "
] | 1,592 | 1,592 | 1,592 | CONTRIBUTOR | null | This speeds up the examples tests from 120s to 70s.
Note that this probably understates the difference, since all the relevant models are cached on my local machine.
We can also improve `test_run_glue` in a future PR.
I made an issue discussing improvements this PR does not fix: #5057
### More detail:
Before (master): 119.36s
```bash
============================ slowest test durations ============================
42.69s call examples/test_examples.py::ExamplesTests::test_run_squad
21.20s call examples/test_examples.py::ExamplesTests::test_run_glue
19.02s call examples/token-classification/test_ner_examples.py::ExamplesTests::test_run_ner
13.65s call examples/test_examples.py::ExamplesTests::test_run_language_modeling
3.80s call examples/test_examples.py::ExamplesTests::test_generation
```
After: 69.3 seconds
```bash
============================ slowest test durations ============================
19.12s call examples/token-classification/test_ner_examples.py::ExamplesTests::test_run_ner
14.60s call examples/test_examples.py::ExamplesTests::test_run_language_modeling
13.49s call examples/test_examples.py::ExamplesTests::test_run_glue
3.87s call examples/summarization/test_summarization_examples.py::TestBartExamples::test_bart_run_sum_cli
3.08s call examples/translation/t5/test_t5_examples.py::TestT5Examples::test_t5_cli
2.64s call examples/summarization/test_summarization_examples.py::TestT5Examples::test_t5_cli
2.20s call examples/summarization/test_summarization_examples.py::TestBartExamples::test_t5_run_sum_cli
1.81s call examples/test_examples.py::ExamplesTests::test_run_squad
1.67s call examples/test_examples.py::ExamplesTests::test_generation
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5059/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5059/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5059",
"html_url": "https://github.com/huggingface/transformers/pull/5059",
"diff_url": "https://github.com/huggingface/transformers/pull/5059.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5059.patch",
"merged_at": 1592330806000
} |
https://api.github.com/repos/huggingface/transformers/issues/5058 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5058/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5058/comments | https://api.github.com/repos/huggingface/transformers/issues/5058/events | https://github.com/huggingface/transformers/issues/5058 | 639,636,718 | MDU6SXNzdWU2Mzk2MzY3MTg= | 5,058 | Error when loading Flaubert model | {
"login": "saharghannay",
"id": 41583809,
"node_id": "MDQ6VXNlcjQxNTgzODA5",
"avatar_url": "https://avatars.githubusercontent.com/u/41583809?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saharghannay",
"html_url": "https://github.com/saharghannay",
"followers_url": "https://api.github.com/users/saharghannay/followers",
"following_url": "https://api.github.com/users/saharghannay/following{/other_user}",
"gists_url": "https://api.github.com/users/saharghannay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saharghannay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saharghannay/subscriptions",
"organizations_url": "https://api.github.com/users/saharghannay/orgs",
"repos_url": "https://api.github.com/users/saharghannay/repos",
"events_url": "https://api.github.com/users/saharghannay/events{/privacy}",
"received_events_url": "https://api.github.com/users/saharghannay/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi! Could you show what command you used to launch the script?",
"I was runing the token classification example : https://github.com/huggingface/transformers/blob/master/examples/token-classification/run.sh with flaubert-large-cased as model name. \r\n\r\nI tried to use the downloaded model from https://huggingface.co/flaubert models, but after 100 epochs the results are very bad, the model did not learn any thing. \r\nI Don't understand what is the problem with flaubert-large-cased model. \r\nNote that flaubert-base-cased model give good results on NER taks.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,592 | 1,598 | 1,598 | NONE | null | # 🐛 Bug
## Information
I am trying to run the example run_ner.py using the Flaubert model, but I get this error:
```
Traceback (most recent call last):
  File "/py37/lib/python3.7/site-packages/torch/serialization.py", line 191, in _check_seekable
    f.seek(f.tell())
AttributeError: 'NoneType' object has no attribute 'seek'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/py37/lib/python3.7/site-packages/transformers/modeling_utils.py", line 516, in from_pretrained
    state_dict = torch.load(resolved_archive_file, map_location="cpu")
  File "/py37/lib/python3.7/site-packages/torch/serialization.py", line 387, in load
    return _load(f, map_location, pickle_module, **pickle_load_args)
  File "/py37/lib/python3.7/site-packages/torch/serialization.py", line 549, in _load
    _check_seekable(f)
  File "/py37/lib/python3.7/site-packages/torch/serialization.py", line 194, in _check_seekable
    raise_err_msg(["seek", "tell"], e)
  File "/py37/lib/python3.7/site-packages/torch/serialization.py", line 187, in raise_err_msg
    raise type(e)(msg)
AttributeError: 'NoneType' object has no attribute 'seek'. You can only torch.load from a file that is seekable. Please pre-load the data into a buffer like io.BytesIO and try to load from it instead.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/py37/lib/python3.7/site-packages/transformers/modeling_auto.py", line 1098, in from_pretrained
    return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
  File "/py37/lib/python3.7/site-packages/transformers/modeling_utils.py", line 519, in from_pretrained
    "Unable to load weights from pytorch checkpoint file. "
OSError: Unable to load weights from pytorch checkpoint file. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
```
I got another error when I added `from_tf=True`.
Do you have any idea how I can solve this problem?
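For what it's worth, a minimal reproduction sketch (the hub identifier below is an assumption; a `NoneType` archive file usually means the checkpoint could not be resolved or downloaded, so verify the model name/path and connectivity):
```python
from transformers import AutoModelForTokenClassification

# If this fails the same way, the model identifier or the network is the likely culprit.
model = AutoModelForTokenClassification.from_pretrained("flaubert/flaubert_large_cased")
```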
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5058/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5058/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5057 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5057/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5057/comments | https://api.github.com/repos/huggingface/transformers/issues/5057/events | https://github.com/huggingface/transformers/issues/5057 | 639,635,502 | MDU6SXNzdWU2Mzk2MzU1MDI= | 5,057 | Examples tests improvements | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1936351150,
"node_id": "MDU6TGFiZWwxOTM2MzUxMTUw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Examples",
"name": "Examples",
"color": "d4c5f9",
"default": false,
"description": "Which is related to examples in general"
},
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
},
{
"id": 2139563322,
"node_id": "MDU6TGFiZWwyMTM5NTYzMzIy",
"url": "https://api.github.com/repos/huggingface/transformers/labels/cleanup",
"name": "cleanup",
"color": "e7fc49",
"default": false,
"description": ""
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi, @sshleifer I would like to work on this issue. Shall I take this up.\r\n\r\n",
"Yes, I would pick one item from the list to start with.\r\nMake sure you pull first, I just merged some improvements.",
"@sshleifer I will work on the first one.\r\n\r\nJust to be clear I will note down what I have understood and what I have in mind to do.\r\n\r\n1. The issue as per my understanding: The tests in the example folder are not up to the mark and we have to add certain parts to fix this. For this, as the first point suggests when running tests in the examples folder the tests is not checking if Cuda or fp16 is available.\r\n\r\n2. There are 4 tests in the `test_examples.py`\r\n- text-classification(run_glue)\r\n- language-modeling(run_language_modeling)\r\n- question-answering(run_squad)\r\n- text-generation(run_generation)\r\nso each should run with cuda or fp16 if available.\r\n\r\ncorrect if I am wrong.",
"Good idea.\r\n\r\n1) Yes. I think the desired behavior is\r\nif `torch.cuda.is_available()`:\r\n- assume fp16 is available\r\n- run the code with fp16 and cude.\r\n\r\nTry to do that for all tests. Some will likely break. You can add a TODO to those and keep them running on CPU for now.\r\n\r\n1b) You probably need a GPU to do this PR.\r\n\r\n2) There are more tests than that:\r\n\r\n```bash\r\n$ ls examples/**/test*.py\r\n\r\nexamples/adversarial/test_hans.py\r\nexamples/summarization/bertabs/test_utils_summarization.py\r\nexamples/summarization/test_summarization_examples.py\r\nexamples/test_examples.py\r\nexamples/token-classification/test_ner_examples.py\r\nexamples/translation/t5/test_t5_examples.py\r\n```\r\n",
"You don't need to cover all those tests. Feel free to break the work into very small PRs and tag me on them.",
"Thanks, @sshleifer for the clarification \r\n\r\nI will start working on this.",
"> 2. The `@slow` decorator used in the main tests is not importable, so there are no @slow tests.\r\n\r\nThis is no longer the case. \r\n\r\n```from transformers.testing_utils import slow```\r\n\r\nThis item can be removed.",
"> 3. `test_run_glue` uses distilbert-case-cased. It should use a smaller model, one of the `tiny` family [here](https://huggingface.co/models?search=sshleifer/tiny) or a new tiny model.\r\n\r\nI tried a few and either they have a wrong head dimension as in `sshleifer/tiny-distilbert-base-cased` (9x2), but tests are (2x2), so it won't load as is (`size mismatch for classifier.weight:` and `size mismatch for classifier.bias`), or they perform terribly with the current test settings.\r\n\r\nI also did an experiment for the same for the suggested inside the existing test:\r\n```\r\n def test_run_language_modeling(self):\r\n stream_handler = logging.StreamHandler(sys.stdout)\r\n logger.addHandler(stream_handler)\r\n # TODO: switch to smaller model like sshleifer/tiny-distilroberta-base\r\n```\r\nwith terrible results (perplexity > 5,000, whereas the current one < 35).\r\n\r\nSo when these tiny models are suggested as a replacement to speed things up, what things are to be sacrificed?\r\n",
"Happy to do big models and mark slow. I just don't want to do big models when we are only testing output shape.",
"> Happy to do big models and mark slow. I just don't want to do big models when we are only testing output shape.\r\n\r\nSo then we could write a test that uses a tiny model that does just that? i.e. no outcome quality checks. Leaving big models for quality checks with @slow.",
"Yes!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,592 | 1,601 | 1,601 | CONTRIBUTOR | null | There are a few things about the `examples/` tests that are suboptimal:
1. They never use cuda or fp16, even if they are available (see the sketch after this list).
2. The `@slow` decorator used in the main tests is not importable, so there are no @slow tests.
3. `test_run_glue` uses distilbert-base-cased. It should use a smaller model, one of the `tiny` family [here](https://huggingface.co/models?search=sshleifer/tiny) or a new tiny model.
4. There is no test coverage for TPU.
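For item 1 above, a minimal sketch of the intended pattern (it assumes the example scripts' existing `--fp16` and `--no_cuda` flags):
```python
import torch

def extra_cli_args():
    # Run on GPU with fp16 when a CUDA device is present; otherwise pin the test to CPU.
    return ["--fp16"] if torch.cuda.is_available() else ["--no_cuda"]
```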
Any help on any of these fronts would be much appreciated! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5057/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5057/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5056 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5056/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5056/comments | https://api.github.com/repos/huggingface/transformers/issues/5056/events | https://github.com/huggingface/transformers/pull/5056 | 639,605,373 | MDExOlB1bGxSZXF1ZXN0NDM1MTY0ODU1 | 5,056 | Add more tests on tokenizers serialization - fix bugs | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5056?src=pr&el=h1) Report\n> Merging [#5056](https://codecov.io/gh/huggingface/transformers/pull/5056?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b28b53713161a6299c757c32f7179a2cb2d8cbd7&el=desc) will **increase** coverage by `0.05%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5056?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5056 +/- ##\n==========================================\n+ Coverage 77.96% 78.02% +0.05% \n==========================================\n Files 138 138 \n Lines 23838 23847 +9 \n==========================================\n+ Hits 18585 18606 +21 \n+ Misses 5253 5241 -12 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5056?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5056/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `92.47% <100.00%> (+0.96%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5056/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `92.82% <100.00%> (+1.95%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/5056/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.28% <100.00%> (+2.31%)` | :arrow_up: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5056/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `38.38% <0.00%> (-1.19%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5056/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.00% <0.00%> (ø)` | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5056?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5056?src=pr&el=footer). Last update [b28b537...8f87f25](https://codecov.io/gh/huggingface/transformers/pull/5056?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,592 | 1,593 | 1,593 | MEMBER | null | Adds more tests on tokenizer serialization (test when adding tokens, special tokens, etc).
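For context, the kind of round-trip now covered looks like this (a sketch; paths and tokens are illustrative):
```python
import os
from transformers import BertTokenizer

tok = BertTokenizer.from_pretrained("bert-base-uncased")
tok.add_tokens(["new_tok"])  # plain added token
tok.add_special_tokens({"additional_special_tokens": ["<special>"]})

os.makedirs("/tmp/tok", exist_ok=True)
tok.save_pretrained("/tmp/tok")

reloaded = BertTokenizer.from_pretrained("/tmp/tok")
assert reloaded.convert_tokens_to_ids("new_tok") == tok.convert_tokens_to_ids("new_tok")
```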
Tokenizer serialization was not thoroughly tested and actually had quite a few holes and bugs. This fixes the related issues. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5056/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5056/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5056",
"html_url": "https://github.com/huggingface/transformers/pull/5056",
"diff_url": "https://github.com/huggingface/transformers/pull/5056.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5056.patch",
"merged_at": 1593028389000
} |
https://api.github.com/repos/huggingface/transformers/issues/5055 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5055/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5055/comments | https://api.github.com/repos/huggingface/transformers/issues/5055/events | https://github.com/huggingface/transformers/issues/5055 | 639,600,385 | MDU6SXNzdWU2Mzk2MDAzODU= | 5,055 | How can I load the finetuned BART model to memory? | {
"login": "tomaszgarbus",
"id": 11790160,
"node_id": "MDQ6VXNlcjExNzkwMTYw",
"avatar_url": "https://avatars.githubusercontent.com/u/11790160?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tomaszgarbus",
"html_url": "https://github.com/tomaszgarbus",
"followers_url": "https://api.github.com/users/tomaszgarbus/followers",
"following_url": "https://api.github.com/users/tomaszgarbus/following{/other_user}",
"gists_url": "https://api.github.com/users/tomaszgarbus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tomaszgarbus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tomaszgarbus/subscriptions",
"organizations_url": "https://api.github.com/users/tomaszgarbus/orgs",
"repos_url": "https://api.github.com/users/tomaszgarbus/repos",
"events_url": "https://api.github.com/users/tomaszgarbus/events{/privacy}",
"received_events_url": "https://api.github.com/users/tomaszgarbus/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Duplicate of #4144 ",
"Thanks!"
] | 1,592 | 1,592 | 1,592 | NONE | null | I have finetuned a `facebook/bart-large` model following the example here: https://github.com/huggingface/transformers/blob/master/examples/summarization/finetune.py
As output I got a `checkpointcheckpoint_ckpt_epoch_0.ckpt` file. How can I create a `BartForConditionalGeneration` instance with the updated weights?
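For reference, a minimal loading sketch (an assumption based on the duplicate issue #4144: the file is a PyTorch Lightning checkpoint whose `state_dict` keys carry a `model.` prefix; the path below is a placeholder):
```python
import torch
from transformers import BartForConditionalGeneration

ckpt = torch.load("checkpoint_ckpt_epoch_0.ckpt", map_location="cpu")  # placeholder path
# Strip the Lightning "model." prefix so the keys match the transformers model.
state_dict = {k[len("model."):]: v for k, v in ckpt["state_dict"].items() if k.startswith("model.")}
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large", state_dict=state_dict)
```
 | {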
"url": "https://api.github.com/repos/huggingface/transformers/issues/5055/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5055/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5054 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5054/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5054/comments | https://api.github.com/repos/huggingface/transformers/issues/5054/events | https://github.com/huggingface/transformers/pull/5054 | 639,594,388 | MDExOlB1bGxSZXF1ZXN0NDM1MTU2MDM2 | 5,054 | Add pad_to_multiple_of on tokenizers (reimport) | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
}
] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5054?src=pr&el=h1) Report\n> Merging [#5054](https://codecov.io/gh/huggingface/transformers/pull/5054?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/24f46ea3f3e5006ca38735306753a846a0823174&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5054?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5054 +/- ##\n==========================================\n+ Coverage 79.08% 79.09% +0.01% \n==========================================\n Files 138 138 \n Lines 24078 24081 +3 \n==========================================\n+ Hits 19041 19047 +6 \n+ Misses 5037 5034 -3 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5054?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5054/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `91.48% <ø> (ø)` | |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/5054/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.28% <ø> (ø)` | |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5054/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `94.52% <100.00%> (ø)` | |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5054/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.70% <100.00%> (+0.54%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5054/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `76.42% <0.00%> (-0.39%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5054/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.26% <0.00%> (+0.12%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5054?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5054?src=pr&el=footer). Last update [24f46ea...449cba1](https://codecov.io/gh/huggingface/transformers/pull/5054?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,592 | 1,593 | 1,593 | MEMBER | null | Reimported from #4731.
Introduces `pad_to_multiple_of` on both slow and fast tokenizers. This parameter enables the "bucketization" behaviour, also referred to as shape polymorphism.
This is especially useful when targeting dedicated NN accelerators such as:
- NVidia Tensor Core (on >= Volta Architecture)
- XLA (PyTorch TPU)
- XLA (Jax / Flax)
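A usage sketch (the model name is illustrative; with `padding=True`, the padded length is rounded up to the requested multiple):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer(
    ["short", "a somewhat longer sentence"],
    padding=True,
    pad_to_multiple_of=8,  # pad the longest sequence up to a multiple of 8
)
assert len(batch["input_ids"][0]) % 8 == 0
```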
Bonus:
- Fix RobertaTokenizer when input is empty `text[0].is_space()` would crash (#3608).
Edit (@thomwolf):
- updated to the new API
- raise a `ValueError` if you request truncation to a length which is not a multiple of `pad_to_multiple_of` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5054/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5054/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5054",
"html_url": "https://github.com/huggingface/transformers/pull/5054",
"diff_url": "https://github.com/huggingface/transformers/pull/5054.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5054.patch",
"merged_at": 1593165358000
} |
https://api.github.com/repos/huggingface/transformers/issues/5053 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5053/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5053/comments | https://api.github.com/repos/huggingface/transformers/issues/5053/events | https://github.com/huggingface/transformers/issues/5053 | 639,570,483 | MDU6SXNzdWU2Mzk1NzA0ODM= | 5,053 | T5 model for classification doesn't work properly for large number of classes. | {
"login": "HiteshVamshi",
"id": 11312735,
"node_id": "MDQ6VXNlcjExMzEyNzM1",
"avatar_url": "https://avatars.githubusercontent.com/u/11312735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HiteshVamshi",
"html_url": "https://github.com/HiteshVamshi",
"followers_url": "https://api.github.com/users/HiteshVamshi/followers",
"following_url": "https://api.github.com/users/HiteshVamshi/following{/other_user}",
"gists_url": "https://api.github.com/users/HiteshVamshi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HiteshVamshi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HiteshVamshi/subscriptions",
"organizations_url": "https://api.github.com/users/HiteshVamshi/orgs",
"repos_url": "https://api.github.com/users/HiteshVamshi/repos",
"events_url": "https://api.github.com/users/HiteshVamshi/events{/privacy}",
"received_events_url": "https://api.github.com/users/HiteshVamshi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi, @HiteshVamshi , what you are trying is highly experimental, I haven't seen anyone using T5 for 100 class classification. So you'll probably need to experiment with it.\r\n\r\nI would like to know few more details\r\n1) What is the size of your dataset\r\n2) which version of t5 are you using (t5-small, t5-base, t5-large etc)\r\n3) how many epochs ",
"Hi, @patil-suraj , I was also experimenting with the T5 model. I got a good performance for 50 classes so tried with more.\r\n-The size of the dataset used was 65k.\r\n-I used T5-small.\r\n-I tested for 10 epochs. But there was no significant improvement after 3 epochs.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Can we use the T5 model with only 2000 samples for classifications (not a lot of classes, just around 10)? what about binary classification?"
] | 1,592 | 1,678 | 1,598 | NONE | null | The T5 model works properly for 50-class classification, but when we try 100 classes it outputs an empty string ("") for a large number of test examples. Is this model suitable for multi-class classification with a large number of classes? If yes, what do you think might be the problem with what I am doing? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5053/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5053/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5052 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5052/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5052/comments | https://api.github.com/repos/huggingface/transformers/issues/5052/events | https://github.com/huggingface/transformers/issues/5052 | 639,495,208 | MDU6SXNzdWU2Mzk0OTUyMDg= | 5,052 | How to consume movement-pruning .h5 models in QnA pipeline | {
"login": "pranavpawar3",
"id": 39311422,
"node_id": "MDQ6VXNlcjM5MzExNDIy",
"avatar_url": "https://avatars.githubusercontent.com/u/39311422?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pranavpawar3",
"html_url": "https://github.com/pranavpawar3",
"followers_url": "https://api.github.com/users/pranavpawar3/followers",
"following_url": "https://api.github.com/users/pranavpawar3/following{/other_user}",
"gists_url": "https://api.github.com/users/pranavpawar3/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pranavpawar3/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pranavpawar3/subscriptions",
"organizations_url": "https://api.github.com/users/pranavpawar3/orgs",
"repos_url": "https://api.github.com/users/pranavpawar3/repos",
"events_url": "https://api.github.com/users/pranavpawar3/events{/privacy}",
"received_events_url": "https://api.github.com/users/pranavpawar3/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hello,\r\nI've only work with PyTorch for all the experiments for movement pruning (even though I've converted the checkpoints in the hub to their TF version).\r\nThe instructions in the notebook show you how you can load a optimized version of the checkpoint (pruning+quantization) which was saved with hdf5 with an .h5 extension. It is not a tensorflow checkpoint. You would have to adapt the steps in the notebook to do it in tensorflow.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,592 | 1,601 | 1,601 | CONTRIBUTOR | null | # ❓ Questions & Help
## Details
Hi, I am working on the movement-pruning method for QnA models. I performed all the [given steps](https://github.com/huggingface/transformers/tree/master/examples/movement-pruning) to generate the pruned model and then used [this notebook](https://github.com/huggingface/transformers/blob/master/examples/movement-pruning/Saving_PruneBERT.ipynb) to generate the .h5 model files.
However, I am facing an issue when consuming this model with `QuestionAnsweringPipeline`.
For loading the model, the config file is copied from `BertForQuestionAnswering`, as the pruning repo does not generate any config file.
```python
BERT_PRUNED_PATH = SERIALIZATION_DIR + '/dbg' + '/BERT_Pruned/'
config = BertConfig.from_json_file(BERT_PRUNED_PATH + 'config.json')

# we used squad_sparse.h5, which is renamed to tf_model.h5
model_BERT_pruned = TFBertForQuestionAnswering.from_pretrained(
    BERT_PRUNED_PATH + 'tf_model.h5', config=config
)
# or
model_BERT_pruned = BertForQuestionAnswering.from_pretrained(
    BERT_PRUNED_PATH + 'tf_model.h5', config=config, from_tf=True
)
```
config =
```json
{
  "architectures": [
    "BertForQuestionAnswering"
  ],
  "attention_probs_dropout_prob": 0.1,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "type_vocab_size": 2,
  "vocab_size": 30522
}
```
For the pipeline:
```python
fill_mask_qna_BERT_pruned = pipeline(
    "question-answering",
    model=model_BERT_pruned,
    tokenizer=tokenizer,
    framework="tf",
)
```
Now, when I test the pipeline on questions and context, I get essentially random answers, probably because the model is not being loaded properly.
@VictorSanh Can you share instructions on how to load these .h5-format pruned models using Hugging Face modules? Or is there another way to consume the model?
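For reference, a first debugging step is to confirm what the .h5 file actually contains. A minimal sketch (assuming `h5py` is installed; no particular key layout is assumed):
```python
import h5py

with h5py.File("squad_sparse.h5", "r") as f:
    def show(name, obj):
        # print every stored array with its shape and dtype
        if isinstance(obj, h5py.Dataset):
            print(name, obj.shape, obj.dtype)
    f.visititems(show)
```
If the keys look like raw tensor dumps rather than a Keras `tf_model.h5` layer layout, that matches the comment above: the file is a plain hdf5 dump produced by the notebook and cannot be read with `from_pretrained`. | {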
"url": "https://api.github.com/repos/huggingface/transformers/issues/5052/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5052/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5051 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5051/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5051/comments | https://api.github.com/repos/huggingface/transformers/issues/5051/events | https://github.com/huggingface/transformers/pull/5051 | 639,486,133 | MDExOlB1bGxSZXF1ZXN0NDM1MDY2MDQ1 | 5,051 | Fix LR decay in TF Trainer | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5051?src=pr&el=h1) Report\n> Merging [#5051](https://codecov.io/gh/huggingface/transformers/pull/5051?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9022ef021a56db975d25c7108cbd19d0dd399174&el=desc) will **increase** coverage by `0.84%`.\n> The diff coverage is `6.66%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5051?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5051 +/- ##\n==========================================\n+ Coverage 77.08% 77.93% +0.84% \n==========================================\n Files 138 138 \n Lines 23841 23855 +14 \n==========================================\n+ Hits 18379 18592 +213 \n+ Misses 5462 5263 -199 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5051?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5051/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `18.44% <6.66%> (-0.26%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5051/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `76.42% <0.00%> (-0.39%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5051/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.86% <0.00%> (+0.14%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5051/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.90% <0.00%> (+1.38%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5051/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `94.92% <0.00%> (+75.00%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5051?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5051?src=pr&el=footer). Last update [9022ef0...ea9f19f](https://codecov.io/gh/huggingface/transformers/pull/5051?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Just rebase on master, should be ok to merge now, unless you have some other things you want me to change @LysandreJik ?"
] | 1,592 | 1,593 | 1,593 | CONTRIBUTOR | null | This PR mainly fixes issue #5045. It also aligns the TF trainer more closely with the PT trainer through:
- a `set_seed()` function
- use of the `logging_first_step` argument
- a better logging message when training and when loading from a checkpoint
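For illustration, a typical TF-side `set_seed()` looks roughly like this (a sketch of the idea, not necessarily the exact code added in this PR):
```python
import random

import numpy as np
import tensorflow as tf

def set_seed(seed: int):
    """Seed the Python, NumPy and TensorFlow RNGs for reproducible runs."""
    random.seed(seed)
    np.random.seed(seed)
    tf.random.set_seed(seed)
```
 | {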
"url": "https://api.github.com/repos/huggingface/transformers/issues/5051/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5051/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5051",
"html_url": "https://github.com/huggingface/transformers/pull/5051",
"diff_url": "https://github.com/huggingface/transformers/pull/5051.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5051.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/5050 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5050/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5050/comments | https://api.github.com/repos/huggingface/transformers/issues/5050/events | https://github.com/huggingface/transformers/issues/5050 | 639,471,859 | MDU6SXNzdWU2Mzk0NzE4NTk= | 5,050 | TypeError: function() argument 1 must be code, not str | {
"login": "manhlab",
"id": 47383746,
"node_id": "MDQ6VXNlcjQ3MzgzNzQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/47383746?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manhlab",
"html_url": "https://github.com/manhlab",
"followers_url": "https://api.github.com/users/manhlab/followers",
"following_url": "https://api.github.com/users/manhlab/following{/other_user}",
"gists_url": "https://api.github.com/users/manhlab/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manhlab/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manhlab/subscriptions",
"organizations_url": "https://api.github.com/users/manhlab/orgs",
"repos_url": "https://api.github.com/users/manhlab/repos",
"events_url": "https://api.github.com/users/manhlab/events{/privacy}",
"received_events_url": "https://api.github.com/users/manhlab/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I get this error when create datacollator class",
"Could you provide more information? It's a bit hard to help you here. What is the code you're using and what is the stacktrace?",
"thanks you! i have the same problem with this #5049 "
] | 1,592 | 1,592 | 1,592 | NONE | null | # 🐛 Bug
`TypeError: function() argument 1 must be code, not str`
## Information
The error is raised when creating the data collator class (the same problem is reported in #5049):
```python
@dataclass
class T2TDataCollator(DataCollator):
    ...
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5050/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5050/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5049 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5049/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5049/comments | https://api.github.com/repos/huggingface/transformers/issues/5049/events | https://github.com/huggingface/transformers/issues/5049 | 639,462,781 | MDU6SXNzdWU2Mzk0NjI3ODE= | 5,049 | DataCollator problem | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"i have the same. It is new bug. i run this week ago and worked",
"try this:\r\n```python\r\nclass T2TDataCollator:\r\n def __call__(self, batch):\r\n```\r\n",
"@abrozso Hi and thanks for the hint, however, it doesn't seem to fix the problem.\r\nI got the following error when the fine-tuning starts:\r\n\r\n06/16/2020 09:03:23 - INFO - transformers.trainer - You are instantiating a Trainer but W&B is not installed. To use wandb logging, run `pip install wandb; wandb login` see https://docs.wandb.com/huggingface.\r\n06/16/2020 09:03:23 - WARNING - transformers.training_args - Using deprecated `--per_gpu_train_batch_size` argument which will be removed in a future version. Using `--per_device_train_batch_size` is preferred.\r\n06/16/2020 09:03:23 - WARNING - transformers.training_args - Using deprecated `--per_gpu_train_batch_size` argument which will be removed in a future version. Using `--per_device_train_batch_size` is preferred.\r\n06/16/2020 09:03:23 - INFO - transformers.trainer - ***** Running training *****\r\n06/16/2020 09:03:23 - INFO - transformers.trainer - Num examples = 13\r\n06/16/2020 09:03:23 - INFO - transformers.trainer - Num Epochs = 4\r\n06/16/2020 09:03:23 - INFO - transformers.trainer - Instantaneous batch size per device = 8\r\n06/16/2020 09:03:23 - INFO - transformers.trainer - Total train batch size (w. parallel, distributed & accumulation) = 64\r\n06/16/2020 09:03:23 - INFO - transformers.trainer - Gradient Accumulation steps = 4\r\n06/16/2020 09:03:23 - INFO - transformers.trainer - Total optimization steps = 0\r\nException in thread Thread-12:\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.6/threading.py\", line 916, in _bootstrap_inner\r\n self.run()\r\n File \"/usr/lib/python3.6/threading.py\", line 864, in run\r\n self._target(*self._args, **self._kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/parallel_loader.py\", line 141, in _loader_worker\r\n _, data = next(data_iter)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py\", line 352, in __next__\r\n data = self._next_data()\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py\", line 392, in _next_data\r\n data = self._dataset_fetcher.fetch(index) # may raise StopIteration\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py\", line 47, in fetch\r\n return self.collate_fn(data)\r\nTypeError: 'T2TDataCollator' object is not callable\r\n",
"@antoniomastro1996: perhaps you can try the xla nightly version (if you are not using that already)",
"@abrozso unfortunately, I'm already using the nightly version",
"You need to instantiate your `T2TDataCollator`: `data_collator = T2TDataCollator()` (or you could make it a simple function if you don't need any state).\r\nWill fix the backward-compatibility this morning.",
"The issue is that it is not a class anymore.",
"Yes, that will stay. Just remove the subclass to `DataCollator` and everything should work:\r\n```\r\nclass MyDataCollator:\r\n def __call__(self, features): ...\r\n```\r\nor (once #5060 is merged)\r\n```\r\nclass MyDataCollator:\r\n def collate_batch(self, features): ...\r\n```\r\nbut this will throw a deprecation warning.",
"I respectfully disagree with the decision to keep `DataCollator` as a callable. Very many existing trainers and notebooks will break as a result. I think many people would agree that it would be best to create a `DataCollatorCallable` callable or something similar as an addition, not as a replacement.",
"Hi all,\r\n\r\nI'm facing issues with this part of the code (post making changes as suggested above) in T5-Base for QA.\r\n\r\n```\r\nimport dataclasses\r\nimport logging\r\nimport os\r\nimport sys\r\nfrom dataclasses import dataclass, field\r\nfrom typing import Dict, List, Optional\r\n\r\nimport numpy as np\r\nimport torch\r\n\r\nfrom transformers import T5ForConditionalGeneration, T5Tokenizer, EvalPrediction\r\nfrom transformers import (\r\n HfArgumentParser,\r\n DataCollator,\r\n Trainer,\r\n TrainingArguments,\r\n set_seed,\r\n)\r\n\r\n\r\nlogger = logging.getLogger(__name__)\r\n\r\n# prepares lm_labels from target_ids, returns examples with keys as expected by the forward method\r\n# this is necessacry because the trainer directly passes this dict as arguments to the model\r\n# so make sure the keys match the parameter names of the forward method\r\n@dataclass\r\nclass T2TDataCollator: #(DataCollator)\r\n def __call__(self, batch: List) -> Dict[str, torch.Tensor]: #\r\n \"\"\"\r\n Take a list of samples from a Dataset and collate them into a batch.\r\n Returns:\r\n A dictionary of tensors\r\n \"\"\"\r\n input_ids = torch.stack([example['input_ids'] for example in batch])\r\n lm_labels = torch.stack([example['target_ids'] for example in batch])\r\n lm_labels[lm_labels[:, :] == 0] = -100\r\n attention_mask = torch.stack([example['attention_mask'] for example in batch])\r\n decoder_attention_mask = torch.stack([example['target_attention_mask'] for example in batch])\r\n \r\n\r\n return {\r\n 'input_ids': input_ids, \r\n 'attention_mask': attention_mask,\r\n 'lm_labels': lm_labels, \r\n 'decoder_attention_mask': decoder_attention_mask\r\n }\r\n```\r\n\r\n**Which is fetching this error:-**\r\n\r\n```\r\nException in thread Thread-12:\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.6/threading.py\", line 916, in _bootstrap_inner\r\n self.run()\r\n File \"/usr/lib/python3.6/threading.py\", line 864, in run\r\n self._target(*self._args, **self._kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/parallel_loader.py\", line 133, in _loader_worker\r\n _, data = next(data_iter)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py\", line 517, in __next__\r\n data = self._next_data()\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py\", line 557, in _next_data\r\n data = self._dataset_fetcher.fetch(index) # may raise StopIteration\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py\", line 47, in fetch\r\n return self.collate_fn(data)\r\n File \"<ipython-input-7-7b8c1b4d4c9a>\", line 36, in __call__\r\n lm_labels = torch.stack([example['target_ids'] for example in batch])\r\n File \"<ipython-input-7-7b8c1b4d4c9a>\", line 36, in <listcomp>\r\n lm_labels = torch.stack([example['target_ids'] for example in batch])\r\nKeyError: 'target_ids'\r\n```\r\n\r\nMy train and validation dataset has 'target_ids' field (read from `datasets.Dataset.from_pandas()` method and mapped the `add_eos_to_examples` and `convert_to_features` successfully):\r\n\r\n`train_dataset['target_ids']`\r\n```\r\ntensor([[ 1027, 9533, 3440, ..., 0, 0, 0],\r\n [ 7327, 1387, 11597, ..., 0, 0, 0],\r\n [ 272, 5, 7130, ..., 0, 0, 0],\r\n ...,\r\n [15810, 1, 0, ..., 0, 0, 0],\r\n [ 7107, 1, 0, ..., 0, 0, 0],\r\n [ 454, 5, 134, ..., 0, 0, 0]])\r\n```\r\n\r\n`valid_dataset['target_ids']`\r\n```\r\ntensor([[15810, 1, 0, ..., 0, 0, 0],\r\n [ 4190, 4329, 1, ..., 0, 0, 0],\r\n [ 4329, 11, 7107, ..., 0, 0, 
0],\r\n ...,\r\n [ 3, 4, 1, ..., 0, 0, 0],\r\n [ 3, 4, 1, ..., 0, 0, 0],\r\n [ 8642, 4425, 9, ..., 0, 0, 0]])\r\n```\r\n\r\nI am unable to fetch this field using `class T2TDataCollator:`. Please assist, thank you!"
] | 1,592 | 1,610 | 1,592 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
Hi everybody,
I found an error in the following Colab:
https://colab.research.google.com/drive/1jwXgtOXE8v8_qkiOCbjFQRFC5semK8T7?usp=sharing
Specifically, as far as I understand, something changed in the implementation of the following snippet:
```python
class T2TDataCollator(DataCollator):
    def collate_batch(self, batch: List) -> Dict[str, torch.Tensor]:
        ..........
```
I got the following error: **TypeError: function() argument 1 must be code, not str**
Can you suggest any workarounds?
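Summarizing the workaround from the comments above: `DataCollator` should no longer be subclassed, and the trainer now calls the collator directly. A minimal sketch of the updated collator (the `...` stands in for the original `collate_batch` body):
```python
from typing import Dict, List

import torch

class T2TDataCollator:
    def __call__(self, batch: List) -> Dict[str, torch.Tensor]:
        ...  # same stacking logic as the old collate_batch

# note: pass an *instance* to the Trainer, not the class itself
data_collator = T2TDataCollator()
```
 | {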
"url": "https://api.github.com/repos/huggingface/transformers/issues/5049/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5049/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5048 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5048/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5048/comments | https://api.github.com/repos/huggingface/transformers/issues/5048/events | https://github.com/huggingface/transformers/issues/5048 | 639,390,375 | MDU6SXNzdWU2MzkzOTAzNzU= | 5,048 | After I resume learning, loss is greater than prev checkpoint | {
"login": "urekalion",
"id": 4244158,
"node_id": "MDQ6VXNlcjQyNDQxNTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/4244158?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/urekalion",
"html_url": "https://github.com/urekalion",
"followers_url": "https://api.github.com/users/urekalion/followers",
"following_url": "https://api.github.com/users/urekalion/following{/other_user}",
"gists_url": "https://api.github.com/users/urekalion/gists{/gist_id}",
"starred_url": "https://api.github.com/users/urekalion/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/urekalion/subscriptions",
"organizations_url": "https://api.github.com/users/urekalion/orgs",
"repos_url": "https://api.github.com/users/urekalion/repos",
"events_url": "https://api.github.com/users/urekalion/events{/privacy}",
"received_events_url": "https://api.github.com/users/urekalion/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi! What did you use to train your model? Was it the `run_language_modeling` script? Do you happen to have the command you used?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,592 | 1,598 | 1,598 | NONE | null | # ❓ Questions & Help
## Details
I trained an ALBERT MLM model.
Often an error occurs (memory exception, ... ),
so I resumed training the model from the last checkpoint,
but the training loss goes back to where it started.
What do I need to check?
Thanks,

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5048/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5048/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5047 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5047/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5047/comments | https://api.github.com/repos/huggingface/transformers/issues/5047/events | https://github.com/huggingface/transformers/issues/5047 | 639,333,207 | MDU6SXNzdWU2MzkzMzMyMDc= | 5,047 | How to use 16 token types in pretrained Albert/BERT? | {
"login": "Traeyee",
"id": 12761196,
"node_id": "MDQ6VXNlcjEyNzYxMTk2",
"avatar_url": "https://avatars.githubusercontent.com/u/12761196?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Traeyee",
"html_url": "https://github.com/Traeyee",
"followers_url": "https://api.github.com/users/Traeyee/followers",
"following_url": "https://api.github.com/users/Traeyee/following{/other_user}",
"gists_url": "https://api.github.com/users/Traeyee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Traeyee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Traeyee/subscriptions",
"organizations_url": "https://api.github.com/users/Traeyee/orgs",
"repos_url": "https://api.github.com/users/Traeyee/repos",
"events_url": "https://api.github.com/users/Traeyee/events{/privacy}",
"received_events_url": "https://api.github.com/users/Traeyee/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Maybe I don't get it, but can't you simply do:\r\n\r\n```\r\nfrom transformers.modeling_bert import BertConfig, BertModel\r\n\r\nbconfig = BertConfig.from_pretrained('bert-base-uncased')\r\nbconfig.type_vocab_size = 16\r\nmodel = BertModel(bconfig)\r\nmodel.parameters\r\n# ...\r\n# (token_type_embeddings): Embedding(16, 768)\r\n# ...\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"> Maybe I don't get it, but can't you simply do:\r\n> \r\n> ```\r\n> from transformers.modeling_bert import BertConfig, BertModel\r\n> \r\n> bconfig = BertConfig.from_pretrained('bert-base-uncased')\r\n> bconfig.type_vocab_size = 16\r\n> model = BertModel(bconfig)\r\n> model.parameters\r\n> # ...\r\n> # (token_type_embeddings): Embedding(16, 768)\r\n> # ...\r\n> ```\r\n\r\nYes. What I can do is to reassign the token type embeddings after init, and thing is if there is any risk to do this. But I don't continue on this because my dialogue task is too difficult for almost language model even with BERT hahhhhh",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,592 | 1,604 | 1,604 | NONE | null | I have a dialogue task in which I use token types to distinguish the different states of the different utterances, but all the pretrained models I can find have type_vocab_size=2. To accomplish my goal, I have to rewrite a lot of code in a hacky way. So I want to ask: is there an elegant way to restore the pretrained weights while ignoring the token type embeddings? Simply modifying the type_vocab_size in the provided config.json will certainly raise an error.
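A minimal sketch of the reassign-after-init approach mentioned in the comments above (keeping the two pretrained rows is optional; the main risk is that the 14 new rows start randomly initialized and need fine-tuning):
```python
import torch.nn as nn
from transformers import BertModel

model = BertModel.from_pretrained("bert-base-uncased")

old = model.embeddings.token_type_embeddings   # Embedding(2, hidden_size)
new = nn.Embedding(16, old.embedding_dim)      # Embedding(16, hidden_size)
new.weight.data[:2] = old.weight.data          # optionally reuse the 2 pretrained rows
model.embeddings.token_type_embeddings = new
model.config.type_vocab_size = 16
```
 | {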
"url": "https://api.github.com/repos/huggingface/transformers/issues/5047/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5047/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5046 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5046/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5046/comments | https://api.github.com/repos/huggingface/transformers/issues/5046/events | https://github.com/huggingface/transformers/issues/5046 | 639,332,077 | MDU6SXNzdWU2MzkzMzIwNzc= | 5,046 | ref #4733 | {
"login": "etveritas",
"id": 27916175,
"node_id": "MDQ6VXNlcjI3OTE2MTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/27916175?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/etveritas",
"html_url": "https://github.com/etveritas",
"followers_url": "https://api.github.com/users/etveritas/followers",
"following_url": "https://api.github.com/users/etveritas/following{/other_user}",
"gists_url": "https://api.github.com/users/etveritas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/etveritas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/etveritas/subscriptions",
"organizations_url": "https://api.github.com/users/etveritas/orgs",
"repos_url": "https://api.github.com/users/etveritas/repos",
"events_url": "https://api.github.com/users/etveritas/events{/privacy}",
"received_events_url": "https://api.github.com/users/etveritas/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"@patrickvonplaten hello, Is there a way to solve this problem",
"Closing since a duplicate of https://github.com/huggingface/transformers/issues/4733. "
] | 1,592 | 1,592 | 1,592 | NONE | null |
# 🐛 Bug
## Information
Model I am using: TFBertEncoder
Language I am using the model on: English
The problem arises when using:
* [x] my own modified scripts: (give details below)
The task I am working on is:
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. When I use TFBertEncoder, I get an error.
Here is my code:
```python
import tensorflow as tf
import numpy as np
from transformers.modeling_tf_bert import BertConfig, TFBertEncoder
print(tf.__name__, tf.__version__)
input_a = tf.keras.layers.Input(shape=(91, 128))
config = BertConfig()
config.hidden_size = 128
config.num_attention_heads = 4
# config.output_attentions = False
# config.output_hidden_states = False
head_mask = [None for _ in range(config.num_hidden_layers)]
encoder_output = TFBertEncoder(config=config)([input_a, None, head_mask])[0]
print(encoder_output.shape)
test_out = tf.keras.layers.Dense(128)(encoder_output)
print(test_out.shape)
```
## Expected behavior
The code should run without errors. Instead, here is the error I get:
```
(None, 91, 128)
2020-06-03 11:18:10.160647: W tensorflow/core/common_runtime/base_collective_executor.cc:217] BaseCollectiveExecutor::StartAbort Failed precondition: Error while reading resource variable _AnonymousVar189 from Container: localhost. This could mean that the variable was uninitialized. Not found: Resource localhost/_AnonymousVar189/class tensorflow::Var does not exist.
[[{{node output_23/dense/BiasAdd/ReadVariableOp}}]]
Traceback (most recent call last):
File "D:/python/tx/TEST.py", line 16, in <module>
a = tf.keras.layers.Dense(128)(encoder_output)
File "D:\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\base_layer.py", line 720, in __call__
base_layer_utils.create_keras_history(inputs)
File "D:\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\base_layer_utils.py", line 187, in create_keras_history
_, created_layers = _create_keras_history_helper(tensors, set(), [])
File "D:\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\base_layer_utils.py", line 249, in _create_keras_history_helper
layer_inputs, processed_ops, created_layers)
File "D:\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\base_layer_utils.py", line 249, in _create_keras_history_helper
layer_inputs, processed_ops, created_layers)
File "D:\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\base_layer_utils.py", line 249, in _create_keras_history_helper
layer_inputs, processed_ops, created_layers)
[Previous line repeated 5 more times]
File "D:\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\base_layer_utils.py", line 247, in _create_keras_history_helper
constants[i] = backend.function([], op_input)([])
File "D:\Anaconda3\lib\site-packages\tensorflow_core\python\keras\backend.py", line 3727, in __call__
outputs = self._graph_fn(*converted_inputs)
File "D:\Anaconda3\lib\site-packages\tensorflow_core\python\eager\function.py", line 1551, in __call__
return self._call_impl(args, kwargs)
File "D:\Anaconda3\lib\site-packages\tensorflow_core\python\eager\function.py", line 1591, in _call_impl
return self._call_flat(args, self.captured_inputs, cancellation_manager)
File "D:\Anaconda3\lib\site-packages\tensorflow_core\python\eager\function.py", line 1692, in _call_flat
ctx, args, cancellation_manager=cancellation_manager))
File "D:\Anaconda3\lib\site-packages\tensorflow_core\python\eager\function.py", line 545, in call
ctx=ctx)
File "D:\Anaconda3\lib\site-packages\tensorflow_core\python\eager\execute.py", line 67, in quick_execute
six.raise_from(core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.FailedPreconditionError: Error while reading resource variable _AnonymousVar189 from Container: localhost. This could mean that the variable was uninitialized. Not found: Resource localhost/_AnonymousVar189/class tensorflow::Var does not exist.
[[node output_23/dense/BiasAdd/ReadVariableOp (defined at /python/tx/TEST.py:16) ]] [Op:__inference_keras_scratch_graph_5205]
Function call stack:
keras_scratch_graph
```
## Environment info
* `transformers` version: 2.3.0 (in conda list)
* Platform:
* Python version: 3.7
* PyTorch version (GPU?):
* Tensorflow version (GPU?): TF 2.1.0 (GPU)
* Using GPU in script?:
* Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5046/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5046/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5045 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5045/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5045/comments | https://api.github.com/repos/huggingface/transformers/issues/5045/events | https://github.com/huggingface/transformers/issues/5045 | 639,310,313 | MDU6SXNzdWU2MzkzMTAzMTM= | 5,045 | TFTrainer does not consider number of epochs when calculating learning rate | {
"login": "borisdayma",
"id": 715491,
"node_id": "MDQ6VXNlcjcxNTQ5MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/715491?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/borisdayma",
"html_url": "https://github.com/borisdayma",
"followers_url": "https://api.github.com/users/borisdayma/followers",
"following_url": "https://api.github.com/users/borisdayma/following{/other_user}",
"gists_url": "https://api.github.com/users/borisdayma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/borisdayma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/borisdayma/subscriptions",
"organizations_url": "https://api.github.com/users/borisdayma/orgs",
"repos_url": "https://api.github.com/users/borisdayma/repos",
"events_url": "https://api.github.com/users/borisdayma/events{/privacy}",
"received_events_url": "https://api.github.com/users/borisdayma/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This is fixed"
] | 1,592 | 1,594 | 1,594 | CONTRIBUTOR | null | # 🐛 Bug
`TFTrainer` does not consider number of epochs when calculating learning rate
## Information
When using `TFTrainer`, the learning rate decreases to 0 at the end of the first epoch, even when we want to train for multiple epochs.
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The task I am working on is:
* [X] an official GLUE/SQUaD task: run_tf_glue with mrpc task
* [ ] my own task or dataset: (give details below)
## To reproduce
`python run_tf_glue.py --model_name_or_path bert-base-cased --task_name MRPC --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3.0 --output_dir /tmp/test_hf/ --overwrite_output_dir --logging_dir hf --evaluate_during_training --eval_steps 50 --logging_steps 10`
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
This is what we get; the learning rate drops to zero after the first epoch:

You can refer to [W&B run](https://app.wandb.ai/borisd13/huggingface/runs/2zcsfumy?workspace=user-borisd13) for more details.
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
The learning rate should slowly decrease until the end of the 3rd epoch.
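For illustration, a schedule whose horizon spans all epochs would be computed from the total number of optimization steps, roughly like this (a sketch of the expected behavior, not the actual TFTrainer code; the dataset size is approximate):
```python
import tensorflow as tf

num_train_examples = 3668   # MRPC train set size (approximate)
train_batch_size = 32
num_train_epochs = 3

steps_per_epoch = num_train_examples // train_batch_size
total_steps = steps_per_epoch * num_train_epochs  # decaying over 1 epoch instead reproduces the bug

lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-5,
    decay_steps=total_steps,
    end_learning_rate=0.0,
)
```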
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.11.0
- Platform: Linux-5.3.0-53-generic-x86_64-with-glibc2.10
- Python version: 3.8.3
- PyTorch version (GPU?): N/A
- Tensorflow version (GPU?): 2.2.0 (True)
- Using GPU in script?: Yes (one only)
- Using distributed or parallel set-up in script?: No
@jplu I recorded the issue here so we don't forget to fix it.
Let me know if I can help. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5045/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5045/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5044 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5044/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5044/comments | https://api.github.com/repos/huggingface/transformers/issues/5044/events | https://github.com/huggingface/transformers/pull/5044 | 639,306,135 | MDExOlB1bGxSZXF1ZXN0NDM0OTIyNTE1 | 5,044 | refactor(wandb): consolidate import | {
"login": "borisdayma",
"id": 715491,
"node_id": "MDQ6VXNlcjcxNTQ5MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/715491?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/borisdayma",
"html_url": "https://github.com/borisdayma",
"followers_url": "https://api.github.com/users/borisdayma/followers",
"following_url": "https://api.github.com/users/borisdayma/following{/other_user}",
"gists_url": "https://api.github.com/users/borisdayma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/borisdayma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/borisdayma/subscriptions",
"organizations_url": "https://api.github.com/users/borisdayma/orgs",
"repos_url": "https://api.github.com/users/borisdayma/repos",
"events_url": "https://api.github.com/users/borisdayma/events{/privacy}",
"received_events_url": "https://api.github.com/users/borisdayma/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5044?src=pr&el=h1) Report\n> Merging [#5044](https://codecov.io/gh/huggingface/transformers/pull/5044?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f9f8a5312e92541ff9a5f483fc4907ec87da876e&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `64.28%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5044?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5044 +/- ##\n==========================================\n+ Coverage 77.39% 77.40% +0.01% \n==========================================\n Files 130 130 \n Lines 22018 22014 -4 \n==========================================\n Hits 17041 17041 \n+ Misses 4977 4973 -4 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5044?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5044/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `78.26% <50.00%> (-21.74%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5044/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.52% <100.00%> (-0.06%)` | :arrow_down: |\n| [src/transformers/trainer\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5044/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `18.69% <100.00%> (-0.30%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5044?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5044?src=pr&el=footer). Last update [f9f8a53...47b9975](https://codecov.io/gh/huggingface/transformers/pull/5044?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks!"
] | 1,592 | 1,592 | 1,592 | CONTRIBUTOR | null | This PR consolidates the import logic of wandb as suggested [here](https://github.com/huggingface/transformers/pull/4946#discussion_r440070708) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5044/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5044/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5044",
"html_url": "https://github.com/huggingface/transformers/pull/5044",
"diff_url": "https://github.com/huggingface/transformers/pull/5044.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5044.patch",
"merged_at": 1592293244000
} |
https://api.github.com/repos/huggingface/transformers/issues/5043 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5043/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5043/comments | https://api.github.com/repos/huggingface/transformers/issues/5043/events | https://github.com/huggingface/transformers/pull/5043 | 639,276,106 | MDExOlB1bGxSZXF1ZXN0NDM0ODk4NDM2 | 5,043 | Fix marian tokenizer save pretrained | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2039044877,
"node_id": "MDU6TGFiZWwyMDM5MDQ0ODc3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/marian",
"name": "marian",
"color": "30cc95",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5043?src=pr&el=h1) Report\n> Merging [#5043](https://codecov.io/gh/huggingface/transformers/pull/5043?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/36434220fc807c5015bc8f0f1e50ab21f7d34914&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5043?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5043 +/- ##\n=======================================\n Coverage 77.36% 77.37% \n=======================================\n Files 130 130 \n Lines 21989 21990 +1 \n=======================================\n+ Hits 17012 17014 +2 \n+ Misses 4977 4976 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5043?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/5043/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `92.85% <100.00%> (+0.96%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5043?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5043?src=pr&el=footer). Last update [3643422...5899f7a](https://codecov.io/gh/huggingface/transformers/pull/5043?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,592 | 1,592 | 1,592 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5043/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5043/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5043",
"html_url": "https://github.com/huggingface/transformers/pull/5043",
"diff_url": "https://github.com/huggingface/transformers/pull/5043.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5043.patch",
"merged_at": 1592315300000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5042 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5042/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5042/comments | https://api.github.com/repos/huggingface/transformers/issues/5042/events | https://github.com/huggingface/transformers/issues/5042 | 639,270,838 | MDU6SXNzdWU2MzkyNzA4Mzg= | 5,042 | ❓ [TFTrainer] How to run on 8 TPU cores ? | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834054694,
"node_id": "MDU6TGFiZWwxODM0MDU0Njk0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/TensorFlow",
"name": "TensorFlow",
"color": "FF6F00",
"default": false,
"description": "Anything TensorFlow"
}
] | closed | false | null | [] | [
"Hello !\r\n\r\nThis is because the `--tpu_num_cores` is not taken into account yet. If you want to use TPUs, just fill the TPU name with `--tpu_name` and it will detect automatically the number of cores. For now TPUs with TF Trainer is under development so some use cases might not work properly.",
"Thanks for the input @jplu \r\n\r\nSo as you mentioned the number of TPU cores is automatically detected. It is accessible with `training_args.n_gpu`.\r\n\r\nAlso I didn't notice but here : https://github.com/huggingface/transformers/blob/e4aaa4580515446cd5a2972ab42fec0b95819c84/src/transformers/training_args.py#L150\r\n\r\nThe batch size is automatically adjusted to the number of cores.\r\n\r\nSo the behavior I observed is completely normal, as `--per_device_train_batch_size` is the batch size **per TPU cores**. \r\n\r\n"
] | 1,592 | 1,592 | 1,592 | CONTRIBUTOR | null | # ❓ Questions & Help
I'm trying to run TFTrainer on 8 TPU cores, but I don't understand how to make it work.
I tried running my script with the flags `--tpu_num_cores 8 --per_device_train_batch_size 8`, expecting each core to handle a batch size of 1.
But when I print the shape of my inputs, I get `[8, x]` instead of `[1, x]`, which leads to a memory error.
---
If I start the training with `--tpu_num_cores 8 --per_device_train_batch_size 1`, the shape of the inputs is correct (`[1, x]`), but the number of optimization steps computed is not correct (if I have 8k samples, it says I have 8k optimization steps, but I expected 1k steps because I am using 8 TPU cores...).
---
Am I doing something wrong? **How can I train on 8 TPU cores, with a batch size of 1 for each core?**
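Following the resolution in the comments above (`--per_device_train_batch_size` is the batch size *per TPU core*), the numbers work out as a quick sanity check:
```python
n_tpu_cores = 8
per_device_train_batch_size = 1

total_train_batch_size = per_device_train_batch_size * n_tpu_cores  # 8
num_samples = 8_000
steps_per_epoch = num_samples // total_train_batch_size  # 1_000 optimization steps
```
 | {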
"url": "https://api.github.com/repos/huggingface/transformers/issues/5042/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5042/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5041 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5041/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5041/comments | https://api.github.com/repos/huggingface/transformers/issues/5041/events | https://github.com/huggingface/transformers/issues/5041 | 639,256,274 | MDU6SXNzdWU2MzkyNTYyNzQ= | 5,041 | How can I use tokenizer.encode_plus to input and encode 2 sentences - (query,answer) pair for training a BERT binary classifier? | {
"login": "soumya-ranjan-sahoo",
"id": 36735094,
"node_id": "MDQ6VXNlcjM2NzM1MDk0",
"avatar_url": "https://avatars.githubusercontent.com/u/36735094?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/soumya-ranjan-sahoo",
"html_url": "https://github.com/soumya-ranjan-sahoo",
"followers_url": "https://api.github.com/users/soumya-ranjan-sahoo/followers",
"following_url": "https://api.github.com/users/soumya-ranjan-sahoo/following{/other_user}",
"gists_url": "https://api.github.com/users/soumya-ranjan-sahoo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/soumya-ranjan-sahoo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/soumya-ranjan-sahoo/subscriptions",
"organizations_url": "https://api.github.com/users/soumya-ranjan-sahoo/orgs",
"repos_url": "https://api.github.com/users/soumya-ranjan-sahoo/repos",
"events_url": "https://api.github.com/users/soumya-ranjan-sahoo/events{/privacy}",
"received_events_url": "https://api.github.com/users/soumya-ranjan-sahoo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Here my intention is to train a BERT binary classifier which classifies if an answer corresponding to a given query is correct. How do I proceed to encode the query, answer pair in the Input?",
"@soumya-ranjan-sahoo You could check the [token type ids](https://huggingface.co/transformers/glossary.html#token-type-ids). Please feed the question and answer to the, for instance, encode_plus function and generate type type ids. \r\n\r\nThen you could just feed the token type ids with, for instance, input ids and attention masks to conduct classification",
"@bright1993ff66 Great. I was successful in my experiment. But now I have a follow-up question. I understand the maximum length of the permissible words for BERT is 512. In my case (sentence pair classification) does that imply the combined word length for the query and the answer has to be 512 since I have huge answers for my experiment. \r\nSurprisingly I was able to fine-tune BERT with query and answer (most query-answer pair have a combined word length of more than 512), and BERT didn't throw any error or warnings. How did it fine-tune or what it exactly did with such sentences? \r\n\r\nThank you. \r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,592 | 1,598 | 1,598 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**:
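For the question in the title, the text-pair form of `encode_plus` suggested in the comments above looks roughly like this (a minimal sketch; the model name and lengths are just examples, and `pad_to_max_length` is the argument used by the 2.x-era API):
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

query = "What is the capital of France?"          # example query
answer = "Paris is the capital city of France."   # example candidate answer

encoded = tokenizer.encode_plus(
    query,                     # first sequence
    answer,                    # second sequence (text pair)
    max_length=128,
    pad_to_max_length=True,
    return_token_type_ids=True,
)
# token_type_ids are 0 for the query tokens and 1 for the answer tokens,
# which is what lets BERT tell the two segments apart
```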
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5041/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5041/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5040 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5040/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5040/comments | https://api.github.com/repos/huggingface/transformers/issues/5040/events | https://github.com/huggingface/transformers/issues/5040 | 639,232,413 | MDU6SXNzdWU2MzkyMzI0MTM= | 5,040 | "AutoTokenizer.from_pretrained" does not work when loading a pretrained MarianTokenizer from a local directory | {
"login": "erikchwang",
"id": 16256959,
"node_id": "MDQ6VXNlcjE2MjU2OTU5",
"avatar_url": "https://avatars.githubusercontent.com/u/16256959?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/erikchwang",
"html_url": "https://github.com/erikchwang",
"followers_url": "https://api.github.com/users/erikchwang/followers",
"following_url": "https://api.github.com/users/erikchwang/following{/other_user}",
"gists_url": "https://api.github.com/users/erikchwang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/erikchwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/erikchwang/subscriptions",
"organizations_url": "https://api.github.com/users/erikchwang/orgs",
"repos_url": "https://api.github.com/users/erikchwang/repos",
"events_url": "https://api.github.com/users/erikchwang/events{/privacy}",
"received_events_url": "https://api.github.com/users/erikchwang/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2039044877,
"node_id": "MDU6TGFiZWwyMDM5MDQ0ODc3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/marian",
"name": "marian",
"color": "30cc95",
"default": false,
"description": ""
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"I noticed that after saving the pretrained MarianTokenizer to \"my_dir\", the \"source.spm\" file and \"target.spm\" file are actually named as:\r\n\r\n> 1bec78f268e25152d11e6efa41998f2ebebe3ce5452c952c90fc7264c8c45a5b.23f506277c63e64e484c4de9d754a6625e5ba734bb6153470be9b7ffdb7c4ac5\r\n\r\nand\r\n\r\n> 5f95a1efcd8b6093955eb77d42cf97bde71563395863991bd96ad0832776f409.52488b746595fe55ab4afaebb1c23e29994354ddfebd6eddb77815395dc1d604\r\n\r\nWhen I changed the file names back to \"source.spm\" and \"target.spm\", the error disappears.",
"I figured it out! The spm files are coming from the cache.\r\nSo their names are not human readable! Fixed by tomorrow.",
"Thanks a lot... Will this fix be included in the next release?",
"Yes!",
"Same issue exists for `albert` models also",
"Please make a new issue with instructions to reproduce. Thanks!",
"Did you ever solve this for Albert models? @mittalsuraj18 "
] | 1,592 | 1,642 | 1,592 | NONE | null | # 🐛 Bug
## Information
I want to save MarianConfig, MarianTokenizer, and MarianMTModel to a local directory ("my_dir") and then load them:
> import transformers
>
> transformers.AutoConfig.from_pretrained("Helsinki-NLP/opus-mt-en-de").save_pretrained("my_dir")
> transformers.AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de").save_pretrained("my_dir")
> transformers.AutoModelWithLMHead.from_pretrained("Helsinki-NLP/opus-mt-en-de").save_pretrained("my_dir")
>
> config = transformers.AutoConfig.from_pretrained("my_dir")
> tokenizer = transformers.AutoTokenizer.from_pretrained("my_dir")
> model = transformers.AutoModelWithLMHead.from_pretrained("my_dir")
But the above code failed when loading the saved MarianTokenizer from "my_dir":
> Traceback (most recent call last):
> File "<input>", line 8, in <module>
> File "/Users/anaconda/lib/python3.6/site-packages/transformers/tokenization_auto.py", line 206, in from_pretrained
> return tokenizer_class_py.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
> File "/Users/anaconda/lib/python3.6/site-packages/transformers/tokenization_utils.py", line 911, in from_pretrained
> return cls._from_pretrained(*inputs, **kwargs)
> File "/Users/anaconda/lib/python3.6/site-packages/transformers/tokenization_utils.py", line 1062, in _from_pretrained
> tokenizer = cls(*init_inputs, **init_kwargs)
> File "/Users/anaconda/lib/python3.6/site-packages/transformers/tokenization_marian.py", line 83, in __init__
> self.spm_source = load_spm(source_spm)
> File "/Users/anaconda/lib/python3.6/site-packages/transformers/tokenization_marian.py", line 236, in load_spm
> spm.Load(path)
> File "/Users/anaconda/lib/python3.6/site-packages/sentencepiece.py", line 367, in Load
> return self.LoadFromFile(model_file)
> File "/Users/anaconda/lib/python3.6/site-packages/sentencepiece.py", line 177, in LoadFromFile
> return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
> TypeError: not a string
> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5040/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5040/timeline | completed | null | null |
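A minimal sketch of the rename workaround described in the comments above, under the assumption that the hash-named files in the comment came from the cache; the placeholder names below must be replaced with whatever hash-named files actually appear in the directory:

```python
# On affected versions, save_pretrained() writes the SentencePiece models
# under their cache-hash names instead of source.spm / target.spm.
# Restoring the canonical names lets from_pretrained() find them again.
import os
from transformers import MarianTokenizer

MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de").save_pretrained("my_dir")

# Replace the placeholders with the actual hash-named files found in my_dir:
# os.rename("my_dir/<source-hash>", "my_dir/source.spm")
# os.rename("my_dir/<target-hash>", "my_dir/target.spm")

tokenizer = MarianTokenizer.from_pretrained("my_dir")
```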
https://api.github.com/repos/huggingface/transformers/issues/5039 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5039/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5039/comments | https://api.github.com/repos/huggingface/transformers/issues/5039/events | https://github.com/huggingface/transformers/pull/5039 | 639,226,874 | MDExOlB1bGxSZXF1ZXN0NDM0ODU1MjI3 | 5,039 | Ability to pickle/unpickle BatchEncoding pickle (reimport) | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5039?src=pr&el=h1) Report\n> Merging [#5039](https://codecov.io/gh/huggingface/transformers/pull/5039?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/36434220fc807c5015bc8f0f1e50ab21f7d34914&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5039?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5039 +/- ##\n=======================================\n Coverage 77.36% 77.37% \n=======================================\n Files 130 130 \n Lines 21989 21998 +9 \n=======================================\n+ Hits 17012 17021 +9 \n Misses 4977 4977 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5039?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5039/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `91.69% <100.00%> (+0.13%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5039/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `75.00% <0.00%> (-0.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5039/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.42% <0.00%> (+0.15%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5039?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5039?src=pr&el=footer). Last update [3643422...5e40fe4](https://codecov.io/gh/huggingface/transformers/pull/5039?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,592 | 1,592 | 1,592 | MEMBER | null | Overrides the __getstate__() & __setstate__() methods to (respectively) export and restore the content of the underlying data dictionary and, if defined, the content of encodings.
Unit tests added to cover the serialization & deserialization of all the exported properties.
Reimported from #4515 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5039/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5039/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5039",
"html_url": "https://github.com/huggingface/transformers/pull/5039",
"diff_url": "https://github.com/huggingface/transformers/pull/5039.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5039.patch",
"merged_at": 1592292326000
} |
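A rough, self-contained sketch (not the library's actual code) of the pickling pattern this PR applies to BatchEncoding — serialize the underlying data dict plus the optional fast-tokenizer encodings, and restore both on unpickling:

```python
# Minimal demonstration of the __getstate__/__setstate__ pickle protocol.
import pickle


class PicklableBatch:
    def __init__(self, data, encodings=None):
        self.data = data              # dict of model inputs (input_ids, ...)
        self._encodings = encodings   # per-example Encoding objects, if any

    def __getstate__(self):
        # Export everything pickle needs to rebuild the object.
        return {"data": self.data, "encodings": self._encodings}

    def __setstate__(self, state):
        # Restore the exported state on unpickling.
        self.data = state["data"]
        self._encodings = state.get("encodings")


batch = PicklableBatch({"input_ids": [[101, 2023, 102]]})
restored = pickle.loads(pickle.dumps(batch))
assert restored.data == batch.data
```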
https://api.github.com/repos/huggingface/transformers/issues/5038 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5038/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5038/comments | https://api.github.com/repos/huggingface/transformers/issues/5038/events | https://github.com/huggingface/transformers/issues/5038 | 639,221,809 | MDU6SXNzdWU2MzkyMjE4MDk= | 5,038 | Cannot save and load pretrained MarianTokenizer | {
"login": "erikchwang",
"id": 16256959,
"node_id": "MDQ6VXNlcjE2MjU2OTU5",
"avatar_url": "https://avatars.githubusercontent.com/u/16256959?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/erikchwang",
"html_url": "https://github.com/erikchwang",
"followers_url": "https://api.github.com/users/erikchwang/followers",
"following_url": "https://api.github.com/users/erikchwang/following{/other_user}",
"gists_url": "https://api.github.com/users/erikchwang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/erikchwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/erikchwang/subscriptions",
"organizations_url": "https://api.github.com/users/erikchwang/orgs",
"repos_url": "https://api.github.com/users/erikchwang/repos",
"events_url": "https://api.github.com/users/erikchwang/events{/privacy}",
"received_events_url": "https://api.github.com/users/erikchwang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
" #4371 "
] | 1,592 | 1,592 | 1,592 | NONE | null | # 🐛 Bug
## Information
I want to save a pretrained MarianTokenizer to a local directory ("my_dir") and then load it:
> import transformers
> transformers.AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de").save_pretrained("my_dir")
> tokenizer = transformers.AutoTokenizer.from_pretrained("my_dir")
But the above code failed:
> Traceback (most recent call last):
> File "/Users/anaconda/lib/python3.6/site-packages/transformers/configuration_utils.py", line 239, in get_config_dict
> local_files_only=local_files_only,
> File "/Users/anaconda/lib/python3.6/site-packages/transformers/file_utils.py", line 267, in cached_path
> raise EnvironmentError("file {} not found".format(url_or_filename))
> OSError: file my_dir/config.json not found
> During handling of the above exception, another exception occurred:
> Traceback (most recent call last):
> File "<input>", line 1, in <module>
> File "/Users/anaconda/lib/python3.6/site-packages/transformers/tokenization_auto.py", line 195, in from_pretrained
> config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs)
> File "/Users/anaconda/lib/python3.6/site-packages/transformers/configuration_auto.py", line 196, in from_pretrained
> config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
> File "/Users/anaconda/lib/python3.6/site-packages/transformers/configuration_utils.py", line 252, in get_config_dict
> raise EnvironmentError(msg)
> OSError: Can't load config for 'my_dir'. Make sure that:
> - 'my_dir' is a correct model identifier listed on 'https://huggingface.co/models'
> - or 'my_dir' is the correct path to a directory containing a config.json file
> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5038/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5038/timeline | completed | null | null |
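A sketch of a workaround for the error above: AutoTokenizer.from_pretrained() starts by reading config.json from the directory, which saving the tokenizer alone does not write. Saving the config (or the model) into the same directory supplies it. Note that reloading may still hit the separate .spm naming bug tracked in #4371 / #5040 on affected versions:

```python
# Save the config alongside the tokenizer so config.json exists in my_dir.
from transformers import AutoConfig, AutoTokenizer

name = "Helsinki-NLP/opus-mt-en-de"
AutoConfig.from_pretrained(name).save_pretrained("my_dir")     # writes config.json
AutoTokenizer.from_pretrained(name).save_pretrained("my_dir")

tokenizer = AutoTokenizer.from_pretrained("my_dir")
```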
https://api.github.com/repos/huggingface/transformers/issues/5037 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5037/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5037/comments | https://api.github.com/repos/huggingface/transformers/issues/5037/events | https://github.com/huggingface/transformers/issues/5037 | 639,221,099 | MDU6SXNzdWU2MzkyMjEwOTk= | 5,037 | The correct way to save and load pretrained MarianTokenizer? | {
"login": "erikchwang",
"id": 16256959,
"node_id": "MDQ6VXNlcjE2MjU2OTU5",
"avatar_url": "https://avatars.githubusercontent.com/u/16256959?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/erikchwang",
"html_url": "https://github.com/erikchwang",
"followers_url": "https://api.github.com/users/erikchwang/followers",
"following_url": "https://api.github.com/users/erikchwang/following{/other_user}",
"gists_url": "https://api.github.com/users/erikchwang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/erikchwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/erikchwang/subscriptions",
"organizations_url": "https://api.github.com/users/erikchwang/orgs",
"repos_url": "https://api.github.com/users/erikchwang/repos",
"events_url": "https://api.github.com/users/erikchwang/events{/privacy}",
"received_events_url": "https://api.github.com/users/erikchwang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
" #4371"
] | 1,592 | 1,592 | 1,592 | NONE | null | I want to save a pretrained MarianTokenizer to a local directory ("my_dir") and then load it:
> import transformers
> transformers.AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de").save_pretrained("my_dir")
> tokenizer = transformers.AutoTokenizer.from_pretrained("my_dir")
But the above code failed:
> Traceback (most recent call last):
> File "/Users/anaconda/lib/python3.6/site-packages/transformers/configuration_utils.py", line 239, in get_config_dict
> local_files_only=local_files_only,
> File "/Users/anaconda/lib/python3.6/site-packages/transformers/file_utils.py", line 267, in cached_path
> raise EnvironmentError("file {} not found".format(url_or_filename))
> OSError: file my_dir/config.json not found
> During handling of the above exception, another exception occurred:
> Traceback (most recent call last):
> File "<input>", line 1, in <module>
> File "/Users/anaconda/lib/python3.6/site-packages/transformers/tokenization_auto.py", line 195, in from_pretrained
> config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs)
> File "/Users/anaconda/lib/python3.6/site-packages/transformers/configuration_auto.py", line 196, in from_pretrained
> config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
> File "/Users/anaconda/lib/python3.6/site-packages/transformers/configuration_utils.py", line 252, in get_config_dict
> raise EnvironmentError(msg)
> OSError: Can't load config for 'my_dir'. Make sure that:
> - 'my_dir' is a correct model identifier listed on 'https://huggingface.co/models'
> - or 'my_dir' is the correct path to a directory containing a config.json file
> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5037/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5037/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5036 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5036/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5036/comments | https://api.github.com/repos/huggingface/transformers/issues/5036/events | https://github.com/huggingface/transformers/pull/5036 | 639,218,116 | MDExOlB1bGxSZXF1ZXN0NDM0ODQ3Nzkw | 5,036 | Refactor Code samples; Test code samples | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5036?src=pr&el=h1) Report\n> Merging [#5036](https://codecov.io/gh/huggingface/transformers/pull/5036?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/24f46ea3f3e5006ca38735306753a846a0823174&el=desc) will **increase** coverage by `0.22%`.\n> The diff coverage is `97.44%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5036?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5036 +/- ##\n==========================================\n+ Coverage 79.08% 79.30% +0.22% \n==========================================\n Files 138 138 \n Lines 24078 24265 +187 \n==========================================\n+ Hits 19041 19243 +202 \n+ Misses 5037 5022 -15 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5036?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5036/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2FsYmVydC5weQ==) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5036/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `93.75% <ø> (ø)` | |\n| [src/transformers/configuration\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5036/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JlcnQucHk=) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/5036/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2N0cmwucHk=) | `97.05% <ø> (ø)` | |\n| [src/transformers/configuration\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5036/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2Rpc3RpbGJlcnQucHk=) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5036/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2VsZWN0cmEucHk=) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_encoder\\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/5036/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2VuY29kZXJfZGVjb2Rlci5weQ==) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5036/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2dwdDIucHk=) | `97.22% <ø> (ø)` | |\n| [src/transformers/configuration\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/5036/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5036/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX21vYmlsZWJlcnQucHk=) | `97.05% <ø> (ø)` | |\n| ... and [50 more](https://codecov.io/gh/huggingface/transformers/pull/5036/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5036?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5036?src=pr&el=footer). 
Last update [24f46ea...a9bb134](https://codecov.io/gh/huggingface/transformers/pull/5036?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"This is amazing! This way we won't do as many mistakes while copy-pasting code for introducing those task-specific models :-)"
] | 1,592 | 1,593 | 1,593 | MEMBER | null | Refactors the code samples to avoid copy/pasting the same sample across classes when updating the model/tokenizer classes and checkpoint names.
- All models now have their docstrings updated.
- Doctest is used for testing
- Fixed a bunch of bugs in all docstrings as well as a few models. All non-cosmetic changes are highlighted below. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5036/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5036/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5036",
"html_url": "https://github.com/huggingface/transformers/pull/5036",
"diff_url": "https://github.com/huggingface/transformers/pull/5036.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5036.patch",
"merged_at": 1593117960000
} |
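An illustrative sketch of the refactoring idea behind this PR: keep a single docstring template parameterized by tokenizer/model class and checkpoint, and attach it via a decorator instead of copy-pasting the sample into every model class. The helper names here are made up for the example, not the library's actual utilities:

```python
# Template with placeholders that each model class fills in once.
CODE_SAMPLE = """
    Example::

        >>> from transformers import {tokenizer_class}, {model_class}
        >>> tokenizer = {tokenizer_class}.from_pretrained('{checkpoint}')
        >>> model = {model_class}.from_pretrained('{checkpoint}')
"""


def add_code_sample(tokenizer_class, model_class, checkpoint):
    def decorator(cls):
        # Append the rendered sample to the class docstring.
        cls.__doc__ = (cls.__doc__ or "") + CODE_SAMPLE.format(
            tokenizer_class=tokenizer_class,
            model_class=model_class,
            checkpoint=checkpoint,
        )
        return cls

    return decorator


@add_code_sample("BertTokenizer", "BertModel", "bert-base-uncased")
class BertModelStub:
    """Stub class to show the decorator in action."""
```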
https://api.github.com/repos/huggingface/transformers/issues/5035 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5035/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5035/comments | https://api.github.com/repos/huggingface/transformers/issues/5035/events | https://github.com/huggingface/transformers/pull/5035 | 639,211,774 | MDExOlB1bGxSZXF1ZXN0NDM0ODQyMzQ5 | 5,035 | update for roberta and xlm | {
"login": "bsinghpratap",
"id": 7297516,
"node_id": "MDQ6VXNlcjcyOTc1MTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/7297516?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bsinghpratap",
"html_url": "https://github.com/bsinghpratap",
"followers_url": "https://api.github.com/users/bsinghpratap/followers",
"following_url": "https://api.github.com/users/bsinghpratap/following{/other_user}",
"gists_url": "https://api.github.com/users/bsinghpratap/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bsinghpratap/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bsinghpratap/subscriptions",
"organizations_url": "https://api.github.com/users/bsinghpratap/orgs",
"repos_url": "https://api.github.com/users/bsinghpratap/repos",
"events_url": "https://api.github.com/users/bsinghpratap/events{/privacy}",
"received_events_url": "https://api.github.com/users/bsinghpratap/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,592 | 1,598 | 1,598 | NONE | null | Two changes are made:
1. Update the inputs with langs during training and evaluation.
2. Update the token_type_ids for RoBERTa; otherwise it throws an error while creating features. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5035/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5035/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5035",
"html_url": "https://github.com/huggingface/transformers/pull/5035",
"diff_url": "https://github.com/huggingface/transformers/pull/5035.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5035.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/5034 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5034/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5034/comments | https://api.github.com/repos/huggingface/transformers/issues/5034/events | https://github.com/huggingface/transformers/pull/5034 | 639,203,069 | MDExOlB1bGxSZXF1ZXN0NDM0ODM0NTc2 | 5,034 | Training & fine-tuning quickstart | {
"login": "joeddav",
"id": 9353833,
"node_id": "MDQ6VXNlcjkzNTM4MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joeddav",
"html_url": "https://github.com/joeddav",
"followers_url": "https://api.github.com/users/joeddav/followers",
"following_url": "https://api.github.com/users/joeddav/following{/other_user}",
"gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joeddav/subscriptions",
"organizations_url": "https://api.github.com/users/joeddav/orgs",
"repos_url": "https://api.github.com/users/joeddav/repos",
"events_url": "https://api.github.com/users/joeddav/events{/privacy}",
"received_events_url": "https://api.github.com/users/joeddav/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5034?src=pr&el=h1) Report\n> Merging [#5034](https://codecov.io/gh/huggingface/transformers/pull/5034?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/24f46ea3f3e5006ca38735306753a846a0823174&el=desc) will **increase** coverage by `0.02%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5034?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5034 +/- ##\n==========================================\n+ Coverage 79.08% 79.10% +0.02% \n==========================================\n Files 138 138 \n Lines 24078 24078 \n==========================================\n+ Hits 19041 19046 +5 \n+ Misses 5037 5032 -5 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5034?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5034/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.77% <0.00%> (+0.14%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5034/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.90% <0.00%> (+1.38%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5034?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5034?src=pr&el=footer). Last update [24f46ea...3ffd35b](https://codecov.io/gh/huggingface/transformers/pull/5034?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"@patrickvonplaten This is a little bit less verbose than what I think you were envisioning but curious what you think. I just showed sequence classification to communicate the general principles rather than covering multiple different tasks, which would make things pretty long.",
"I think it's great! At the moment we can't really show causal lm training because it's not implemented yet in TF :D So sequence classification sounds good to me! \r\n\r\nIn the longer term, think we should have one section for each model type for both TF and PT:\r\n\r\n- Causal LM\r\n\r\n- Masked LM\r\n\r\n- Seq2Seq\r\n\r\n- Seq classifaction\r\n\r\n....\r\n\r\nBut for now I like it! \r\n\r\n",
"I don't think, the section would become too long if we show training for every model type (CLM, MLM, Seq2Seq, ...). If we would start with CLM and in the following sections only add a couple of sentences explaining what should be done differently for MLM *e.g.* and so on I don't think the page becomes too long.\r\n\r\nNevertheless, I'm still wondering if we should have a training section on each model page since training can differ quite a lot between models: XLNet has a very special training scheme, T5 pretraining is different from Bart, Longformer has a special global attention that has to be set, ... => what do you think about this @sgugger @joeddav ? ",
"I personally think this is okay if each task is shown in a different notebook/tutorial (there is a big table of tasks after all). When we are at a point where all tasks can be easily loaded in a few lines of code we can maybe show more, but I fear that the specificity of each task/dataset requiring its own preprocessing function will lose the reader when the essential point of this (beginner's) tutorial is on training and Trainer/TFTrainer.\r\n\r\nHaving an example on each model page may also be problematic since models can be used for several tasks. So it might turn up in having way too many things in the docs as well. For now I think making more independent notebooks that show how to train/fine-tune a model on a given task and link to those in all the right places might be the best solution. That way the reader opts in to see this model trained on that task.",
"I think it's a legitimate question how much guidance we should give for more obscure cases that you mentioned, @patrickvonplaten. My feeling is that those are fairly specialized and it's fair to expect users to be able to figure out more advanced cases like that out between docs/source code/table of tasks. I wouldn't be opposed to incorporating some of the more common tasks here (e.g. MLM training), but I generally agree with @sgugger to err on the side of brevity and clarity for the purpose of a quickstart guide like this one. Then we can lean on a combination of model docs and the big table of tasks for the more obscure/specialized cases.\r\n\r\nAlso, it would help to have docs for Trainer."
] | 1,592 | 1,598 | 1,593 | CONTRIBUTOR | null | This PR adds a short guide to the docs demonstrating how to train & fine-tune models in native PyTorch, native TF2, and using Trainer.
My aim was not to show how to train on every type of task, but simply to communicate the key points along with a couple of very simple and easy-to-follow examples. More involved examples spanning many tasks are linked at the bottom. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5034/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5034/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5034",
"html_url": "https://github.com/huggingface/transformers/pull/5034",
"diff_url": "https://github.com/huggingface/transformers/pull/5034.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5034.patch",
"merged_at": 1593119471000
} |
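In the spirit of the quickstart this PR adds, a minimal fine-tuning sketch with Trainer; dataset construction is elided, and the argument names assume a recent transformers version (older releases used per_gpu_train_batch_size instead of per_device_train_batch_size):

```python
# Fine-tune a sequence classifier with the Trainer API.
from transformers import (
    BertForSequenceClassification,
    Trainer,
    TrainingArguments,
)

model = BertForSequenceClassification.from_pretrained("bert-base-uncased")

training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    logging_steps=100,
)

# train_dataset should yield dicts with input_ids, attention_mask, and labels:
# trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset)
# trainer.train()
```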
https://api.github.com/repos/huggingface/transformers/issues/5033 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5033/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5033/comments | https://api.github.com/repos/huggingface/transformers/issues/5033/events | https://github.com/huggingface/transformers/pull/5033 | 639,199,741 | MDExOlB1bGxSZXF1ZXN0NDM0ODMxNDk2 | 5,033 | update for roberta and xlm | {
"login": "bsinghpratap",
"id": 7297516,
"node_id": "MDQ6VXNlcjcyOTc1MTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/7297516?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bsinghpratap",
"html_url": "https://github.com/bsinghpratap",
"followers_url": "https://api.github.com/users/bsinghpratap/followers",
"following_url": "https://api.github.com/users/bsinghpratap/following{/other_user}",
"gists_url": "https://api.github.com/users/bsinghpratap/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bsinghpratap/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bsinghpratap/subscriptions",
"organizations_url": "https://api.github.com/users/bsinghpratap/orgs",
"repos_url": "https://api.github.com/users/bsinghpratap/repos",
"events_url": "https://api.github.com/users/bsinghpratap/events{/privacy}",
"received_events_url": "https://api.github.com/users/bsinghpratap/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"check_code_quality failed. Updating and sending a new request."
] | 1,592 | 1,592 | 1,592 | NONE | null | Two changes are made:
1. Update the inputs with langs during training and evaluation.
2. Update the token_type_ids for RoBERTa; otherwise it throws an error while creating features. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5033/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5033/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5033",
"html_url": "https://github.com/huggingface/transformers/pull/5033",
"diff_url": "https://github.com/huggingface/transformers/pull/5033.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5033.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/5032 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5032/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5032/comments | https://api.github.com/repos/huggingface/transformers/issues/5032/events | https://github.com/huggingface/transformers/pull/5032 | 639,189,675 | MDExOlB1bGxSZXF1ZXN0NDM0ODIyMjkx | 5,032 | Add DistilBertForMultipleChoice | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5032?src=pr&el=h1) Report\n> Merging [#5032](https://codecov.io/gh/huggingface/transformers/pull/5032?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/bbad4c6989d489097f42bbe38001a3f8ca1c5c11&el=desc) will **increase** coverage by `0.03%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5032?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5032 +/- ##\n==========================================\n+ Coverage 77.19% 77.22% +0.03% \n==========================================\n Files 128 128 \n Lines 21877 21906 +29 \n==========================================\n+ Hits 16888 16918 +30 \n+ Misses 4989 4988 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5032?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/5032/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.13% <ø> (ø)` | |\n| [src/transformers/configuration\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5032/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2Rpc3RpbGJlcnQucHk=) | `100.00% <ø> (ø)` | |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5032/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `70.58% <ø> (ø)` | |\n| [src/transformers/modeling\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5032/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `97.70% <100.00%> (+0.20%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5032/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.38% <0.00%> (-0.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5032/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.61% <0.00%> (+0.11%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5032/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.57% <0.00%> (+0.15%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5032?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5032?src=pr&el=footer). Last update [bbad4c6...fdafefb](https://codecov.io/gh/huggingface/transformers/pull/5032?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,592 | 1,592 | 1,592 | COLLABORATOR | null | Another missing model | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5032/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5032/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5032",
"html_url": "https://github.com/huggingface/transformers/pull/5032",
"diff_url": "https://github.com/huggingface/transformers/pull/5032.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5032.patch",
"merged_at": 1592260301000
} |
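A sketch of how a multiple-choice head like the one added here is used: every candidate answer is encoded as its own sequence, inputs are reshaped to (batch_size, num_choices, seq_len), and the model emits one logit per choice. The batched-pair tokenizer call below assumes transformers v3+:

```python
# Score two candidate continuations with DistilBertForMultipleChoice.
import torch
from transformers import DistilBertForMultipleChoice, DistilBertTokenizer

tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
model = DistilBertForMultipleChoice.from_pretrained("distilbert-base-uncased")

prompt = "The sky is"
choices = ["blue.", "made of cheese."]

encoding = tokenizer([prompt, prompt], choices, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in encoding.items()}  # (1, num_choices, seq_len)

outputs = model(**inputs, labels=torch.tensor([0]))
# On this library version the model returns a tuple: (loss, classification logits).
```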
https://api.github.com/repos/huggingface/transformers/issues/5031 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5031/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5031/comments | https://api.github.com/repos/huggingface/transformers/issues/5031/events | https://github.com/huggingface/transformers/pull/5031 | 639,147,997 | MDExOlB1bGxSZXF1ZXN0NDM0Nzg0NDgz | 5,031 | Some changes to simplify the generation function | {
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"repos_url": "https://api.github.com/users/yjernite/repos",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5031?src=pr&el=h1) Report\n> Merging [#5031](https://codecov.io/gh/huggingface/transformers/pull/5031?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7291ea0bff57a017e71b1ea8ec01ff19da298bf0&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `90.32%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5031?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5031 +/- ##\n==========================================\n+ Coverage 77.24% 77.26% +0.01% \n==========================================\n Files 133 133 \n Lines 22146 22128 -18 \n==========================================\n- Hits 17107 17097 -10 \n+ Misses 5039 5031 -8 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5031?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5031/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.06% <88.88%> (+0.32%)` | :arrow_up: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5031/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `96.25% <100.00%> (-0.02%)` | :arrow_down: |\n| [src/transformers/modeling\\_encoder\\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/5031/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `92.20% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5031/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `84.16% <100.00%> (-0.07%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5031/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.26% <0.00%> (+0.62%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5031?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5031?src=pr&el=footer). Last update [7291ea0...f507cd7](https://codecov.io/gh/huggingface/transformers/pull/5031?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Okey I took a deeper look at the PR! Quite hard to see what's all going on there, so a nice refactoring would be very welcome :-). \r\n\r\nI very much like the small changes that are done here that clearly improve the readability and clean the code.\r\n\r\nThe big change of unifying the score computation I am not yet 100% convinced it helps a lot. I agree that we should refactor the generate() function, but not sure whether this is going in the right direction. \r\n\r\nPro's\r\n- The function reduces duplicated code\r\n\r\nCon's\r\n- The function adds more computational cost to the `.generate()` function, which is already quite expensive and heavily used in GPT2 by an additional softmax function. Thinking about how big the output embedding matrices are, this could be significant no?\r\n\r\nI do agree though that we should unify all these functions that are applied to the scores.\r\nIMO, we have to very careful with everything that touches sampling in `_no_generate_beam_search` (greedy decoding is less used here) and everything that touches `argmax` in `_generate_beam_search` (summarization and translation rely on that).\r\n\r\nMy proposal would be the following: \r\nLet's unify all functions that are applied after the `F.log_softmax(next_token_logits, dim=-1)` line into one function and that expects the scores for beam search and the logits for no beam search (I like @sshleifer's naming - I would just say `postprocess_next_token_scores`). This function should be independent of wheter we sample or use the argmax:\r\n\r\n```python\r\ndef postprocess_next_token_scores(\r\n self,\r\n scores,\r\n input_ids,\r\n batch_size,\r\n num_beams,\r\n no_repeat_ngram_size,\r\n bad_words_ids,\r\n cur_len,\r\n min_length,\r\n eos_token_id,\r\n repetition_penalty,\r\n ):\r\n \r\n # repetition penalty (from CTRL paper https://arxiv.org/abs/1909.05858)\r\n if repetition_penalty != 1.0:\r\n self.enforce_repetition_penalty_(\r\n next_token_logits, batch_size, num_beams, input_ids, repetition_penalty,\r\n )\r\n \r\n # set eos token prob to zero if min_length is not reached\r\n if eos_token_id is not None and cur_len < min_length:\r\n scores[:, eos_token_id] = -float(\"inf\")\r\n\r\n if no_repeat_ngram_size > 0:\r\n # calculate a list of banned tokens to prevent repetitively generating the same ngrams\r\n num_batch_hypotheses = batch_size * num_beams\r\n # from fairseq: https://github.com/pytorch/fairseq/blob/a07cb6f40480928c9e0548b737aadd36ee66ac76/fairseq/sequence_generator.py#L345\r\n banned_batch_tokens = calc_banned_ngram_tokens(\r\n input_ids, num_batch_hypotheses, no_repeat_ngram_size, cur_len\r\n )\r\n for i, banned_tokens in enumerate(banned_batch_tokens):\r\n scores[i, banned_tokens] = -float(\"inf\")\r\n\r\n if bad_words_ids is not None:\r\n # calculate a list of banned tokens according to bad words\r\n banned_tokens = calc_banned_bad_words_ids(input_ids, bad_words_ids)\r\n\r\n for i, banned_tokens in enumerate(banned_tokens):\r\n scores[i, banned_tokens] = -float(\"inf\")\r\n\r\n return scores\r\n```\r\n\r\nThis way for 1) we have a function that is applicable for both sampling and greedy search and only should contain function that do so and 2) there is no additional computation cost for the softmax.",
"The temperature function should then for both beam search and no beam search only be applied in the `if do_sample=True` statement (no need to do this for argmax).\r\n\r\nNow, the only thing that will slightly change with this function is that the CTRL enforce penalty for beam search decoding. I tried the new order out on a couple of tensors and the changes are minimal. Also, the repetition penalty is very hacky anyways and we already changed the function from its original formula of the CTRL paper. Also, I haven't seen that anybody used the function really for beam search. What do you think @sshleifer?.",
"`repetition_penalty!=1` only used by CTRL, `model_cards/mrm8488/t5-base-finetuned-summarize-news/README.md:` and `model_cards/gaochangkuan/model_dir/README.md` \r\nSo I think minimal changes to when it is computed are fine.",
"I changed the `finalize_logits` to @patrickvonplaten 's suggested `postprocess_next_token_scores`\r\n\r\nFollowing @sshleifer 's sage advice, I'm leaving the BART starting hack and the temperature in the main body of the generate function for now, will leave dealing with those for a future PR :) \r\n",
"Great, I'm happy with the PR - I think it already makes generate a bunch more readable. \r\nCan we note in the PR description, that we have slight breaking changes for beam search sampling when running with the repetition penalty? and it would be nice to make the function call more robust by using keyword arguments the same way it is done with `_no_beam_search_generate()`\r\n\r\n",
"> Great, I'm happy with the PR - I think it already makes generate a bunch more readable.\r\n> Can we note in the PR description, that we have slight breaking changes for beam search sampling when running with the repetition penalty? and it would be nice to make the function call more robust by using keyword arguments the same way it is done with `_no_beam_search_generate()`\r\n\r\nDone and done, will merge today.",
"In future, we also need to run \r\n```bash\r\npytest RUN_SLOW=1 tests/test_modeling_marian.py\r\n```"
] | 1,592 | 1,592 | 1,592 | MEMBER | null | This PR proposes to simplify generation in `modeling_utils.py` in the following ways:
1. Removing some redundant code in `_generate_no_beam_search`: finished sequences are padded at generation time, and do not need to be padded again before returning
2. Initializes the cache in its permanent form directly for both `_generate_beam_search` and `_generate_no_beam_search`: this removes the need for a first step test in `modeling_bart.py` and `modeling_t5.py` (and presumably future cached seq2seq architectures)
3. Took all of the logit post-processing out of `_generate_beam_search` and `_generate_no_beam_search` and put it in a single `finalize_generation_logscores` function that can be used in both instead of duplicating the code
The following slow test pass in addition to the basic suite:
```
RUN_SLOW=1 pytest tests/test_modeling_bart.py
RUN_SLOW=1 pytest tests/test_modeling_gpt2.py
RUN_SLOW=1 pytest tests/test_modeling_t5.py
```
#### Small breaking changes
1. The previous versionof `_generate_no_beam_search` seemed to be adding padding twice. We removed is as is seemed redundant, but noting it here just in case.
2. This PR moves the addition of the CTRL length penalty from before to after the `log_softmax` in `_generate_beam_search`. This changes scores a little bit but apparently doesn't drastically alter model behavior. Per @patrickvonplaten :
> Now, the only thing that will slightly change with this function is that the CTRL enforce penalty for beam search decoding. I tried the new order out on a couple of tensors and the changes are minimal. Also, the repetition penalty is very hacky anyways and we already changed the function from its original formula of the CTRL paper. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5031/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5031/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5031",
"html_url": "https://github.com/huggingface/transformers/pull/5031",
"diff_url": "https://github.com/huggingface/transformers/pull/5031.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5031.patch",
"merged_at": 1592419686000
} |
https://api.github.com/repos/huggingface/transformers/issues/5030 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5030/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5030/comments | https://api.github.com/repos/huggingface/transformers/issues/5030/events | https://github.com/huggingface/transformers/pull/5030 | 639,130,669 | MDExOlB1bGxSZXF1ZXN0NDM0NzY5MzE0 | 5,030 | Update pipeline examples to doctest syntax | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5030?src=pr&el=h1) Report\n> Merging [#5030](https://codecov.io/gh/huggingface/transformers/pull/5030?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/bbad4c6989d489097f42bbe38001a3f8ca1c5c11&el=desc) will **decrease** coverage by `0.07%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5030?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5030 +/- ##\n==========================================\n- Coverage 77.19% 77.12% -0.08% \n==========================================\n Files 128 128 \n Lines 21877 21877 \n==========================================\n- Hits 16888 16872 -16 \n- Misses 4989 5005 +16 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5030?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5030/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `76.35% <0.00%> (-2.30%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5030/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.38% <0.00%> (-0.41%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5030/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.34% <0.00%> (-0.24%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5030?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5030?src=pr&el=footer). Last update [bbad4c6...db3cb29](https://codecov.io/gh/huggingface/transformers/pull/5030?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"I didn't know this feature, this is pretty cool"
] | 1,592 | 1,592 | 1,592 | COLLABORATOR | null | Also fix the example values to match what's actually returned. This way we can run
```
python -m doctest README.md
```
to test that this code produces the exact same results. Not sure if we want to include this in our CI in some way.
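For reference, a hedged sketch of the doctest-style format the examples now follow (the pipeline name is real, but the expected-output line here is illustrative rather than the exact checked-in value):

```
>>> from transformers import pipeline
>>> nlp = pipeline("sentiment-analysis")
>>> nlp("We are very happy to include pipeline into the transformers repository.")
[{'label': 'POSITIVE', 'score': 0.99}]
```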
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5030/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5030/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5030",
"html_url": "https://github.com/huggingface/transformers/pull/5030",
"diff_url": "https://github.com/huggingface/transformers/pull/5030.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5030.patch",
"merged_at": 1592345698000
} |
https://api.github.com/repos/huggingface/transformers/issues/5029 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5029/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5029/comments | https://api.github.com/repos/huggingface/transformers/issues/5029/events | https://github.com/huggingface/transformers/pull/5029 | 639,127,257 | MDExOlB1bGxSZXF1ZXN0NDM0NzY2MzQ1 | 5,029 | Add reference to NLP (package) dataset | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5029?src=pr&el=h1) Report\n> Merging [#5029](https://codecov.io/gh/huggingface/transformers/pull/5029?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/bbad4c6989d489097f42bbe38001a3f8ca1c5c11&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5029?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5029 +/- ##\n==========================================\n- Coverage 77.19% 77.18% -0.01% \n==========================================\n Files 128 128 \n Lines 21877 21877 \n==========================================\n- Hits 16888 16886 -2 \n- Misses 4989 4991 +2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5029?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5029/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.38% <0.00%> (-0.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5029/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.26% <0.00%> (-0.16%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5029?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5029?src=pr&el=footer). Last update [bbad4c6...0df885f](https://codecov.io/gh/huggingface/transformers/pull/5029?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"I forgot to add the meta tag:\r\ndatasets:\r\n- squad_v2\r\n\r\nSorry",
"added the metadata, @mrm8488 "
] | 1,592 | 1,592 | 1,592 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5029/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5029/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5029",
"html_url": "https://github.com/huggingface/transformers/pull/5029",
"diff_url": "https://github.com/huggingface/transformers/pull/5029.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5029.patch",
"merged_at": 1592295467000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5028 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5028/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5028/comments | https://api.github.com/repos/huggingface/transformers/issues/5028/events | https://github.com/huggingface/transformers/pull/5028 | 639,124,288 | MDExOlB1bGxSZXF1ZXN0NDM0NzY0MDEx | 5,028 | Add reference to NLP dataset | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"I forgot to add the meta tag:\r\ndatasets:\r\n- squad_v2\r\n\r\nSorry",
"same here"
] | 1,592 | 1,592 | 1,592 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5028/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5028/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5028",
"html_url": "https://github.com/huggingface/transformers/pull/5028",
"diff_url": "https://github.com/huggingface/transformers/pull/5028.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5028.patch",
"merged_at": 1592295549000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5027 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5027/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5027/comments | https://api.github.com/repos/huggingface/transformers/issues/5027/events | https://github.com/huggingface/transformers/pull/5027 | 639,116,493 | MDExOlB1bGxSZXF1ZXN0NDM0NzU3MTYz | 5,027 | Remove old doc page and add note about cache in installation | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5027?src=pr&el=h1) Report\n> Merging [#5027](https://codecov.io/gh/huggingface/transformers/pull/5027?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/bbad4c6989d489097f42bbe38001a3f8ca1c5c11&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5027?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5027 +/- ##\n=======================================\n Coverage 77.19% 77.19% \n=======================================\n Files 128 128 \n Lines 21877 21877 \n=======================================\n+ Hits 16888 16889 +1 \n+ Misses 4989 4988 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5027?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5027/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.57% <0.00%> (+0.15%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5027?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5027?src=pr&el=footer). Last update [bbad4c6...2d3587c](https://codecov.io/gh/huggingface/transformers/pull/5027?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,592 | 1,597 | 1,592 | COLLABORATOR | null | As discussed offline, this removes the old page "Loading Google AI or OpenAI pre-trained weights or PyTorch dump" and moves the note about cache to the installation page (also bringing it up to date ^^). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5027/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5027/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5027",
"html_url": "https://github.com/huggingface/transformers/pull/5027",
"diff_url": "https://github.com/huggingface/transformers/pull/5027.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5027.patch",
"merged_at": 1592327022000
} |
https://api.github.com/repos/huggingface/transformers/issues/5026 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5026/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5026/comments | https://api.github.com/repos/huggingface/transformers/issues/5026/events | https://github.com/huggingface/transformers/issues/5026 | 639,087,241 | MDU6SXNzdWU2MzkwODcyNDE= | 5,026 | Error while trying to retrieve BERT embeddings for a custom task | {
"login": "adithya8",
"id": 19238403,
"node_id": "MDQ6VXNlcjE5MjM4NDAz",
"avatar_url": "https://avatars.githubusercontent.com/u/19238403?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adithya8",
"html_url": "https://github.com/adithya8",
"followers_url": "https://api.github.com/users/adithya8/followers",
"following_url": "https://api.github.com/users/adithya8/following{/other_user}",
"gists_url": "https://api.github.com/users/adithya8/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adithya8/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adithya8/subscriptions",
"organizations_url": "https://api.github.com/users/adithya8/orgs",
"repos_url": "https://api.github.com/users/adithya8/repos",
"events_url": "https://api.github.com/users/adithya8/events{/privacy}",
"received_events_url": "https://api.github.com/users/adithya8/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi! The error seems to come from your data processing. It's hard to help you without knowing what your inputs are, how `pad_sequence` works, and how you tokenize your inputs.\r\n\r\nIn recent transformers versions, the tokenizer can take care of truncating/padding and do so for input ids, attention masks and token type ids. Using the encode/encode_plus methods to do this would reduce the risk of errors when pre-processing.",
"Thank you for your response @LysandreJik \r\nI would like to add a few more details pertaining to this error here. \r\n`pad_sequence` is the method from [torch.nn.utils.rnn](https://pytorch.org/docs/master/generated/torch.nn.utils.rnn.pad_sequence.html#torch-nn-utils-rnn-pad-sequence). \r\nWith respect to using `encode/encode_plus`, we are processing the text into multiple segments when they go past the max tokens limit. Hence we would still need to process the output of `encode_plus` to fit it in the token limit. [Which is what we follow right now but using the `tokenize`, `convert_tokens_to_ids` and `create_token_type_ids_from_sequences`.\r\n\r\nThis error doesn't occur when I replace the the three lines of loading config, tokenizer and model (using `AutoConfig`,` AutoTokeizer` and `AutoModel` respectively) with just `BertTokenizer` and `BertModel` directly. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,592 | 1,598 | 1,598 | NONE | null | # 🐛 Bug
## Information
I am using BERT [base-uncased].
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
Here's the error:
```
I0615 15:19:52.531945 140468956133120 file_utils.py:41] PyTorch version 1.1.0 available.
I0615 15:19:53.723086 140468956133120 file_utils.py:57] TensorFlow version 2.0.0 available.
bert-base-cased
I0615 15:19:53.860575 140468956133120 configuration_utils.py:256] loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-config.json from cache at /users2/user1/.cache/torch/transformers/b945b69218e98b3e2c95acf911789741307dec43c698d35fad11c1ae28bda352.9da767be51e1327499df13488672789394e2ca38b877837e52618a67d7002391
I0615 15:19:53.861034 140468956133120 configuration_utils.py:292] Model config BertConfig {
"architectures": [
"BertForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": null,
"do_sample": false,
"eos_token_ids": null,
"finetuning_task": null,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"initializer_range": 0.02,
"intermediate_size": 3072,
"is_decoder": false,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"layer_norm_eps": 1e-12,
"length_penalty": 1.0,
"max_length": 20,
"max_position_embeddings": 512,
"model_type": "bert",
"num_attention_heads": 12,
"num_beams": 1,
"num_hidden_layers": 12,
"num_labels": 2,
"num_return_sequences": 1,
"output_attentions": false,
"output_hidden_states": true,
"output_past": true,
"pad_token_id": 0,
"pruned_heads": {},
"repetition_penalty": 1.0,
"temperature": 1.0,
"top_k": 50,
"top_p": 1.0,
"torchscript": false,
"type_vocab_size": 2,
"use_bfloat16": false,
"vocab_size": 28996
}
I0615 15:19:53.934896 140468956133120 configuration_utils.py:256] loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-config.json from cache at /users2/user1/.cache/torch/transformers/4dad0251492946e18ac39290fcfe91b89d370fee250efe9521476438fe8ca185.7156163d5fdc189c3016baca0775ffce230789d7fa2a42ef516483e4ca884517
I0615 15:19:53.935213 140468956133120 configuration_utils.py:292] Model config BertConfig {
"architectures": [
"BertForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": null,
"do_sample": false,
"eos_token_ids": null,
"finetuning_task": null,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"initializer_range": 0.02,
"intermediate_size": 3072,
"is_decoder": false,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"layer_norm_eps": 1e-12,
"length_penalty": 1.0,
"max_length": 20,
"max_position_embeddings": 512,
"model_type": "bert",
"num_attention_heads": 12,
"num_beams": 1,
"num_hidden_layers": 12,
"num_labels": 2,
"num_return_sequences": 1,
"output_attentions": false,
"output_hidden_states": false,
"output_past": true,
"pad_token_id": 0,
"pruned_heads": {},
"repetition_penalty": 1.0,
"temperature": 1.0,
"top_k": 50,
"top_p": 1.0,
"torchscript": false,
"type_vocab_size": 2,
"use_bfloat16": false,
"vocab_size": 30522
}
I0615 15:19:54.006190 140468956133120 tokenization_utils.py:501] loading file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt from cache at /users2/user1/.cache/torch/transformers/26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084
I0615 15:19:54.133543 140468956133120 modeling_utils.py:461] loading weights file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-pytorch_model.bin from cache at /users2/user1/.cache/torch/transformers/35d8b9d36faaf46728a0192d82bf7d00137490cd6074e8500778afed552a67e5.3fadbea36527ae472139fe84cddaa65454d7429f12d543d80bfc3ad70de55ac2
BertConfig {
"architectures": [
"BertForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": null,
"do_sample": false,
"eos_token_ids": null,
"finetuning_task": null,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"initializer_range": 0.02,
"intermediate_size": 3072,
"is_decoder": false,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"layer_norm_eps": 1e-12,
"length_penalty": 1.0,
"max_length": 20,
"max_position_embeddings": 512,
"model_type": "bert",
"num_attention_heads": 12,
"num_beams": 1,
"num_hidden_layers": 12,
"num_labels": 2,
"num_return_sequences": 1,
"output_attentions": false,
"output_hidden_states": true,
"output_past": true,
"pad_token_id": 0,
"pruned_heads": {},
"repetition_penalty": 1.0,
"temperature": 1.0,
"top_k": 50,
"top_p": 1.0,
"torchscript": false,
"type_vocab_size": 2,
"use_bfloat16": false,
"vocab_size": 28996
}
len(input_ids): 636
Num Batches: 40
---------------------------------------
---------------------------------------
---------------------------------------
Traceback (most recent call last):
File "./dlatkInterface.py", line 2020, in <module>
main()
File "./dlatkInterface.py", line 1013, in main
args.feattable = fe.addBERTTable_(modelName = args.bertmodel, aggregations=args.bertaggs, layersToKeep=args.bertlayers, noContext=args.bertnocontext, layerAggregations = args.bertlayeraggs, wordAggregations=args.transwordaggs, valueFunc = args.valuefunc)
File "/users2/user1/NLP/dlatk/dlatk/featureExtractor.py", line 1327, in addBERTTable_
encAllLayers = model(input_ids = input_ids_padded, attention_mask = attention_mask_padded, token_type_ids = token_type_ids_padded)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "/users2/user1/.local/lib/python3.5/site-packages/transformers/modeling_bert.py", line 783, in forward
input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "/users2/user1/.local/lib/python3.5/site-packages/transformers/modeling_bert.py", line 173, in forward
inputs_embeds = self.word_embeddings(input_ids)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/sparse.py", line 117, in forward
self.norm_type, self.scale_grad_by_freq, self.sparse)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/functional.py", line 1506, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: index out of range at /pytorch/aten/src/TH/generic/THTensorEvenMoreMath.cpp:193
```
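Side note on the log above: it loads the `bert-base-uncased` vocab (30,522 tokens) together with `bert-base-cased` weights (`vocab_size` 28,996), and any token id at or above the model's vocab size would produce exactly this embedding index error. A quick sanity check (hedged sketch, assuming `input_ids` is the list of id tensors used below):

```
# Hedged diagnostic: embedding index errors usually mean some id >= vocab_size.
max_id = max(int(ids.max()) for ids in input_ids)
print(max_id, model.config.vocab_size)  # every id must be strictly below vocab_size
assert max_id < model.config.vocab_size, "possible tokenizer/model vocab mismatch"
```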
I am just trying to retrieve the embeddings from the layers that I want and store them in a list. Here's the code block that hits the error:
```
import numpy as np
import torch
from torch.nn.utils.rnn import pad_sequence
from transformers import AutoConfig, AutoModel, AutoTokenizer

config = AutoConfig.from_pretrained(modelName, output_hidden_states=True)
tokenizer = AutoTokenizer.from_pretrained(tokenizerName)
model = AutoModel.from_pretrained(modelName, config=config)
cuda = False
model.eval()
batch_size = 16
.
.
.
num_batches = int(np.ceil(len(input_ids) / batch_size))
encSelectLayers = []
print('len(input_ids):', len(input_ids))
print('Num Batches:', num_batches)
for i in range(num_batches):
    # Pad each field of the current mini-batch to its longest sequence.
    input_ids_padded = pad_sequence(input_ids[i*batch_size:(i+1)*batch_size], batch_first=True, padding_value=tokenizer.pad_token_id)
    token_type_ids_padded = pad_sequence(token_type_ids[i*batch_size:(i+1)*batch_size], batch_first=True, padding_value=0)
    attention_mask_padded = pad_sequence(attention_mask[i*batch_size:(i+1)*batch_size], batch_first=True, padding_value=0)
    if cuda:
        input_ids_padded = input_ids_padded.to('cuda')
        token_type_ids_padded = token_type_ids_padded.to('cuda')
        attention_mask_padded = attention_mask_padded.to('cuda')
    input_ids_padded = input_ids_padded.long()
    token_type_ids_padded = token_type_ids_padded.long()
    attention_mask_padded = attention_mask_padded.long()
    # print(input_ids_padded.shape, token_type_ids_padded.shape, attention_mask_padded.shape)
    # print(input_ids_padded)
    # print(token_type_ids_padded)
    # print(attention_mask_padded)
    print('---------------------------------------')
    with torch.no_grad():
        encAllLayers = model(input_ids=input_ids_padded, attention_mask=attention_mask_padded, token_type_ids=token_type_ids_padded)
    encAllLayers = encAllLayers[-1]  # tuple of all hidden states (output_hidden_states=True)
    for lyr in layersToKeep:  # shape: (num layers kept, num_batches, batch_size, max seq len, 768)
        encSelectLayers.append([encAllLayers[int(lyr)].detach().cpu().numpy()])
    del encAllLayers
print(np.array(encSelectLayers).shape)
```
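For comparison, a hedged sketch of letting the tokenizer build the padded ids, attention masks and token type ids in one call (`batch_encode_plus` as available in transformers 2.5.x; `texts` is a hypothetical list of the raw input strings):

```
enc = tokenizer.batch_encode_plus(
    texts,                      # hypothetical: the raw input strings
    max_length=512,
    pad_to_max_length=True,
    return_tensors="pt",
)
with torch.no_grad():
    outputs = model(
        input_ids=enc["input_ids"],
        attention_mask=enc["attention_mask"],
        token_type_ids=enc["token_type_ids"],
    )
```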
## Environment info
- `transformers` version: 2.5.1
- Platform: Linux-4.4.0-171-generic-x86_64-with-debian-stretch-sid
- Python version: 3.5.2
- CUDA version: 10.1
- PyTorch version (GPU?): 1.1 (True)
- Tensorflow version (GPU?): 2.0 (True)
- Using GPU in script: No
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5026/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5026/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5025 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5025/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5025/comments | https://api.github.com/repos/huggingface/transformers/issues/5025/events | https://github.com/huggingface/transformers/pull/5025 | 639,086,544 | MDExOlB1bGxSZXF1ZXN0NDM0NzMxMDc1 | 5,025 | Convert hans to Trainer | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5025?src=pr&el=h1) Report\n> Merging [#5025](https://codecov.io/gh/huggingface/transformers/pull/5025?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1bf4098e03afaed2c6e3671c69fd57e9ac304752&el=desc) will **decrease** coverage by `0.09%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5025?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5025 +/- ##\n==========================================\n- Coverage 77.18% 77.09% -0.10% \n==========================================\n Files 128 128 \n Lines 21877 21877 \n==========================================\n- Hits 16886 16866 -20 \n- Misses 4991 5011 +20 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5025?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5025/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `76.35% <0.00%> (-2.30%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5025/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.86% <0.00%> (-1.40%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5025/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.73% <0.00%> (+0.23%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5025/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.79% <0.00%> (+0.40%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5025?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5025?src=pr&el=footer). Last update [1bf4098...201fae2](https://codecov.io/gh/huggingface/transformers/pull/5025?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,592 | 1,592 | 1,592 | COLLABORATOR | null | This follows up from #4854 (@julien-c I took all your comments into account) and will close #4742. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5025/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5025/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5025",
"html_url": "https://github.com/huggingface/transformers/pull/5025",
"diff_url": "https://github.com/huggingface/transformers/pull/5025.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5025.patch",
"merged_at": 1592309192000
} |
https://api.github.com/repos/huggingface/transformers/issues/5024 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5024/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5024/comments | https://api.github.com/repos/huggingface/transformers/issues/5024/events | https://github.com/huggingface/transformers/pull/5024 | 639,064,651 | MDExOlB1bGxSZXF1ZXN0NDM0NzEyNTE3 | 5,024 | [Bart] Question Answering Model is added to tests | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5024?src=pr&el=h1) Report\n> Merging [#5024](https://codecov.io/gh/huggingface/transformers/pull/5024?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1bf4098e03afaed2c6e3671c69fd57e9ac304752&el=desc) will **decrease** coverage by `0.76%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5024?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5024 +/- ##\n==========================================\n- Coverage 77.18% 76.42% -0.77% \n==========================================\n Files 128 128 \n Lines 21877 21877 \n==========================================\n- Hits 16886 16719 -167 \n- Misses 4991 5158 +167 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5024?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5024/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0.00%> (-81.21%)` | :arrow_down: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5024/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `36.80% <0.00%> (-3.88%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/5024/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96.56% <0.00%> (-2.58%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5024/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `76.35% <0.00%> (-2.30%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5024/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `38.17% <0.00%> (-1.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5024/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.51% <0.00%> (-1.39%)` | :arrow_down: |\n| [src/transformers/benchmark/benchmark\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5024/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3V0aWxzLnB5) | `72.67% <0.00%> (-0.30%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5024/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.61% <0.00%> (+0.11%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5024?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5024?src=pr&el=footer). Last update [1bf4098...f36c3eb](https://codecov.io/gh/huggingface/transformers/pull/5024?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,592 | 1,592 | 1,592 | MEMBER | null | As written in PR #4908, the Bart for QA was not added to the test suite. This PR fixes the output attentions test for encoder decoder QA models.
If we would have named tuples, such a test could be made much much cleaner. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5024/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5024/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5024",
"html_url": "https://github.com/huggingface/transformers/pull/5024",
"diff_url": "https://github.com/huggingface/transformers/pull/5024.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5024.patch",
"merged_at": 1592254209000
} |
https://api.github.com/repos/huggingface/transformers/issues/5023 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5023/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5023/comments | https://api.github.com/repos/huggingface/transformers/issues/5023/events | https://github.com/huggingface/transformers/issues/5023 | 639,040,670 | MDU6SXNzdWU2MzkwNDA2NzA= | 5,023 | Multi class classification using Reformer Model | {
"login": "as-stevens",
"id": 61624036,
"node_id": "MDQ6VXNlcjYxNjI0MDM2",
"avatar_url": "https://avatars.githubusercontent.com/u/61624036?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/as-stevens",
"html_url": "https://github.com/as-stevens",
"followers_url": "https://api.github.com/users/as-stevens/followers",
"following_url": "https://api.github.com/users/as-stevens/following{/other_user}",
"gists_url": "https://api.github.com/users/as-stevens/gists{/gist_id}",
"starred_url": "https://api.github.com/users/as-stevens/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/as-stevens/subscriptions",
"organizations_url": "https://api.github.com/users/as-stevens/orgs",
"repos_url": "https://api.github.com/users/as-stevens/repos",
"events_url": "https://api.github.com/users/as-stevens/events{/privacy}",
"received_events_url": "https://api.github.com/users/as-stevens/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"In the reformer projects there is only an card for QA task.\r\nNothing about reformerforsequenceclassification or other model heads.\r\n@patrickvonplaten has assigned the tasks to himself.\r\nAre there any more steps to do when I want to pretrain reformer on larger datasets using the notebook ?\r\nSpecifically we would like to train it on c4 dataset as described in the open tasks.",
"**@flozi00** thank you for the reply. So there is no way at this point that I can implement multi-class classification using Reformer? Or is there a workaround that I can use.",
"@as-stevens you could try to write the classification head yourself.\nAt the moment there is no ready to use solution given.",
"There are currently no pretrained weights for a bidirectional reformer, so adding a QA model extension is not a high priority at the moment. PRs with a clean implementation of ReformerForQA would be welcome :-) ",
"> @as-stevens you could try to write the classification head yourself.\r\n> At the moment there is no ready to use solution given.\r\n\r\n**@flozi00** could point some reference that I can use to write a classification head?",
"I have free computing capacities to train bidirectional reformer model on larger datasets, but no time to do so.\nAny advice or ready to use scripts to do so ? @patrickvonplaten ",
"@flozi00 - this sounds nice! Let me come back to you on this. There is currently no script to do so, but I will think about it :-)",
"@patrickvonplaten we could talk about the details in private chat ?\nThe development should not pause cause missing resources",
"> > @as-stevens you could try to write the classification head yourself.\n> > At the moment there is no ready to use solution given.\n> \n> **@flozi00** could point some reference that I can use to write a classification head?\n\n1. https://github.com/ThilinaRajapakse/simpletransformers/blob/master/simpletransformers/custom_models/models.py\n\n2. https://github.com/ThilinaRajapakse/simpletransformers/tree/master/simpletransformers/classification/transformer_models",
"> > > @as-stevens you could try to write the classification head yourself.\r\n> > > At the moment there is no ready to use solution given.\r\n> > \r\n> > \r\n> > **@flozi00** could point some reference that I can use to write a classification head?\r\n> \r\n> 1. https://github.com/ThilinaRajapakse/simpletransformers/blob/master/simpletransformers/custom_models/models.py\r\n> 2. https://github.com/ThilinaRajapakse/simpletransformers/tree/master/simpletransformers/classification/transformer_models\r\n\r\n** @flozi00 ** Thank you for sharing the above the links, let me go through the links and try to understand the flow/architecture. I am new to this and would appreciate any other pointers as well.",
"**@flozi00** Please let me know who do I verify my changes. By creating the model on a classification data set or is there a way better way to get the code verified? I am new to this hence\r\nsome of my questions may seem basic.",
"Just open an pull request with your changes.\npatrickvonplaten is the author of the implementation of reformer in this repository, I think he would have a look on it.\n\nTraining the classification model on an dataset would be a good proof that it is working.",
"@patrickvonplaten \r\nI have implemented the ReformerForSequenceClassification and ReformerForClassificationHead. I have taken RobertaForSequenceClassification and other classification head as a reference.\r\nFurther, I have not opened a pull request as I wanted to make sure that I have a working sample code before I raise a PR. Test before raising a PR. I am using the IMDB review dataset.\r\n\r\nThe link to the collab;\r\nhttps://colab.research.google.com/drive/1KFsQxLqsMB6vBF4_bRmTFGhdGwkgx0zI?usp=sharing\r\n\r\nThe 3rd cell has all the code related to the reformer classification head.\r\nFurther, I am getting;\r\n**AssertionError: If training, make sure that config.axial_pos_shape factors: (128, 512) multiply to sequence length. Got prod((128, 512)) != sequence_length: 2048. You might want to consider padding your sequence length to 65536 or changing config.axial_pos_shape**.\r\n\r\nI looked at https://github.com/huggingface/transformers/issues/4565 but that looks like it is an LM model and could find any solution for the same.\r\nAny thoughts/suggestions?",
"From the docs\n```\nIn practice, the parameter config.axial_pos_embds_dim is set to list(d1,d2)(d1,d2) which sum has to be equal to config.hidden_size and config.axial_pos_shape is set to list(n1s,n2s)(ns1,ns2) and which product has to be equal to config.max_embedding_size which during training has to be equal to the sequence length of the input_ids.\n```\n\nThe axial pos shape is new in reformer model.\nThe product of it values have to equal to the sequence length.\nYou could change the sequence length or set the axial pos shape to the right values. In this case it could be an list of (32,64)",
"> From the docs\r\n> \r\n> ```\r\n> In practice, the parameter config.axial_pos_embds_dim is set to list(d1,d2)(d1,d2) which sum has to be equal to config.hidden_size and config.axial_pos_shape is set to list(n1s,n2s)(ns1,ns2) and which product has to be equal to config.max_embedding_size which during training has to be equal to the sequence length of the input_ids.\r\n> ```\r\n> \r\n> The axial pos shape is new in reformer model.\r\n> The product of it values have to equal to the sequence length.\r\n> You could change the sequence length or set the axial pos shape to the right values. In this case it could be an list of (32,64)\r\n\r\nWhen I try to change the axial position shape(assuming max sequence length to be 512), I get error\r\n\r\n---> 12 model = ReformerForSequenceClassification.from_pretrained('google/reformer-enwik8', num_labels = 2, axial_pos_shape= (16,32))\r\n\r\n/usr/local/lib/python3.6/dist-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)\r\n 751 raise RuntimeError(\r\n 752 \"Error(s) in loading state_dict for {}:\\n\\t{}\".format(\r\n--> 753 model.__class__.__name__, \"\\n\\t\".join(error_msgs)\r\n 754 )\r\n 755 )\r\n\r\nRuntimeError: Error(s) in loading state_dict for ReformerForSequenceClassification:\r\n\t**size mismatch for reformer.embeddings.position_embeddings.weights.0: copying a param with shape torch.Size([128, 1, 256]) from checkpoint, the shape in current model is torch.Size([16, 1, 256]).\r\n\tsize mismatch for reformer.embeddings.position_embeddings.weights.1: copying a param with shape torch.Size([1, 512, 768]) from checkpoint, the shape in current model is torch.Size([1, 32, 768]).**\r\n",
"> > From the docs\r\n> > ```\r\n> > In practice, the parameter config.axial_pos_embds_dim is set to list(d1,d2)(d1,d2) which sum has to be equal to config.hidden_size and config.axial_pos_shape is set to list(n1s,n2s)(ns1,ns2) and which product has to be equal to config.max_embedding_size which during training has to be equal to the sequence length of the input_ids.\r\n> > ```\r\n> > \r\n> > \r\n> > The axial pos shape is new in reformer model.\r\n> > The product of it values have to equal to the sequence length.\r\n> > You could change the sequence length or set the axial pos shape to the right values. In this case it could be an list of (32,64)\r\n> \r\n> When I try to change the axial position shape(assuming max sequence length to be 512), I get error\r\n> \r\n> ---> 12 model = ReformerForSequenceClassification.from_pretrained('google/reformer-enwik8', num_labels = 2, axial_pos_shape= (16,32))\r\n> \r\n> /usr/local/lib/python3.6/dist-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)\r\n> 751 raise RuntimeError(\r\n> 752 \"Error(s) in loading state_dict for {}:\\n\\t{}\".format(\r\n> --> 753 model.**class**.**name**, \"\\n\\t\".join(error_msgs)\r\n> 754 )\r\n> 755 )\r\n> \r\n> RuntimeError: Error(s) in loading state_dict for ReformerForSequenceClassification:\r\n> **size mismatch for reformer.embeddings.position_embeddings.weights.0: copying a param with shape torch.Size([128, 1, 256]) from checkpoint, the shape in current model is torch.Size([16, 1, 256]). size mismatch for reformer.embeddings.position_embeddings.weights.1: copying a param with shape torch.Size([1, 512, 768]) from checkpoint, the shape in current model is torch.Size([1, 32, 768]).**\r\n\r\n@flozi00 any idea what may be wrong?",
"Sorry, I have pretty much to do at the moment.\nI will have a look on it, but in worst case it take time up to Sunday evening in MEZ Timezone.\nMaybe someone else can answer you earlier",
"> Sorry, I have pretty much to do at the moment.\r\n> I will have a look on it, but in worst case it take time up to Sunday evening in MEZ Timezone.\r\n> Maybe someone else can answer you earlier\r\n\r\nThank you so much for the quick response! I appreciate it. I am just wondering how else could I get some one to throw light on this issue.\r\n",
"Just link someone from huggingface team to this as done with patrickvonplaten earlier.\nin my experience the team is very nice and helpful all the time.\nMaybe you should open an PR with your code and write [WIP] in front of it, so it gets better seen than an issue and you could get faster help by more people",
"> Just link someone from huggingface team to this as done with patrickvonplaten earlier.\r\n> in my experience the team is very nice and helpful all the time.\r\n> Maybe you should open an PR with your code and write [WIP] in front of it, so it gets better seen than an issue and you could get faster help by more people\r\n\r\nMakes sense, Thank you for the advice!",
"Hey, sorry.\r\nI still found no time to have a look on your colab notebook.\r\nDid you opened the pull request?",
"> Hey, sorry.\r\n> I still found no time to have a look on your colab notebook.\r\n> Did you opened the pull request?\r\n\r\n@flozi00 I created a pull request. https://github.com/huggingface/transformers/pull/5198\r\nPlease have a look at it and provide your feedback/suggestions.",
"Hey, just have seen that you already got feedback.\r\nI still didn't had time to run your code cause my calendar is very full until Friday, sorry",
"> Hey, just have seen that you already got feedback.\r\n> I still didn't had time to run your code cause my calendar is very full until Friday, sorry\r\n\r\nHey, I got caught up with work, could not reply earlier!\r\n\r\nI received the feedback and I have implemented the initial feedback as well. Trying to implement the test case for classification head, it is taking time as I need to understand the underlying test framework and also the architecture of the test cases.\r\nFurther, as the implementation of the classification head, did not had much review comment I decided to test the changes on the IMDB Dataset, but I have not been successful! I am getting CUDA error! link to the notebook;\r\n\r\nhttps://colab.research.google.com/drive/1KFsQxLqsMB6vBF4_bRmTFGhdGwkgx0zI#scrollTo=vOyStELCX8VA&uniqifier=2",
"https://discuss.pytorch.org/t/runtimeerror-cuda-error-cublas-status-alloc-failed-when-calling-cublascreate-handle/78545/6\n\nThere are similar issues here.\nIt seems like there is something out of bounds / out of index range",
"> https://discuss.pytorch.org/t/runtimeerror-cuda-error-cublas-status-alloc-failed-when-calling-cublascreate-handle/78545/6\r\n> \r\n> There are similar issues here.\r\n> It seems like there is something out of bounds / out of index range\r\n\r\n@flozi00 \r\nThough the issue was different, I was able to solve the issue, and finally, the classification head is working, Thanks\r\n\r\nFurther, I am trying to play with the sequence length parameter but the model throws an error;\r\nposted a separate issue https://github.com/huggingface/transformers/issues/5320 for the same. Please let me know if you have an idea about this one.\r\n\r\nps: I will update the notebook once I am done with the sequence length setting.",
"Any Idea how I can get sentence representation using Reformer Model? ( 1, 1024 ) shape using reformer-enwik8?\r\n\r\nThanks!",
"you can just use the output of `ReformerModel` no? ",
"@patrickvonplaten It's returning sentences generation output instead of vector?",
"@patrickvonplaten \r\nThis post is a little longer, I appreciate your time and sorry for the long post! But I have been trying to make the classifier work and hence could help myself.\r\n\r\nI have implemented a classification model using the Plain Reformer model,\r\nLink to collab notebook.\r\n[https://colab.research.google.com/drive/1l61NccWTGMfNFPj1T8kvMjnik2iEQWfe?usp=sharing](url)\r\nI have used the pre-trained crime and punishment (CP) tokenizer sequence tokenization. But, I am not able to improve the accuracy of the model form **~50%**, Which is equal to random classification as it is a binary classifier. I tried to play around with the learning rate, batch size, epochs, and sequence length but it does not help. \r\n\r\nI have implemented a classifier using Roberta and that seems to work fine, giving me an accuracy of ~94%. \r\n[https://colab.research.google.com/drive/10vv8YgwJzbKDpd0Q-pXupP86b1pOZJg8?usp=sharing](url)\r\nSo, I started comparing the difference between the two.\r\n\r\n- For Reformer, I am not able to use any existing pre-trained model for fine-tuning unlike Roberta.\r\n- The output of the tokenizer.tokenize(<sentence>) is also much different in both the cases. I mean in the case of Roberta the sentence get tokenized more or less in words, while in case of Reformer the sentence is mostly broken down in characters except for very common words like the, and, it ..etc\r\n\r\n`Roberta output: '<s>', 'ĠOne', 'Ġof', 'Ġthe', 'Ġother', 'Ġreviewers', 'Ġhas', \r\nReformer output: '▁', '<', 's', '>', '▁', 'O', 'n', 'e', '▁of', '▁the', '▁o', 'ther', '▁re', 'v', 'i', 'e', 'w', 'er',\r\n`\r\n\r\nSo, I am wondering if the CP tokenizer still needs to be trained on larger data.\r\nI tried to use the pre-trained XL tokenizer as that is also a sentence piece tokenization but that stated giving me memory issues. Can, I use a different classifier, I read in one of the blogs that tokenizer is a by-product of the model training. Hence, it is tried to the pre-trained model.\r\n\r\nAre there any pre-trained weights available for Reformer to be used to fine-tune for classification. Like it is there for other models, If not; do we have anything planned for it or not?\r\n\r\nThanks\r\nAmit\r\n\r\n\r\n"
] | 1,592 | 1,605 | 1,596 | CONTRIBUTOR | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
TensorFlow enthusiasts can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
Hi All:
I am trying to implement **multi-class classification** using **ReformerModel / ReformerModelWithLMHead**, but I don't see any API implementation for it.
I have 10+ classes of text data and wanted to use the pre-trained ReformerModel / ReformerModelWithLMHead to classify the text.
I see classes like RobertaForSequenceClassification that support text classification, but I could not find an equivalent for Reformer.
Please let me know whether this is implemented for the Reformer model or is work in progress. I searched for an implementation but could not find one.
ps: I am referring to this paper https://web.stanford.edu/class/cs224n/reports/custom/report21.pdf
where they have implemented text classification using Reformer.
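In case it helps, this is the kind of head I have in mind: a hedged sketch only (the class name and the first-token pooling are my assumptions; nothing like this exists in the library yet):

```
import torch.nn as nn
from transformers import ReformerModel

class ReformerForTextClassification(nn.Module):  # hypothetical, not a transformers API
    def __init__(self, model_name, num_labels):
        super().__init__()
        self.reformer = ReformerModel.from_pretrained(model_name)
        # Assumption: the reversible layers concatenate two streams, so the
        # sequence output width is 2 * hidden_size; verify against your config.
        self.classifier = nn.Linear(2 * self.reformer.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask=None):
        sequence_output = self.reformer(input_ids, attention_mask=attention_mask)[0]
        pooled = sequence_output[:, 0, :]  # first-token pooling, a design choice
        return self.classifier(pooled)
```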
Thank you
Amit
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5023/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5023/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5022 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5022/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5022/comments | https://api.github.com/repos/huggingface/transformers/issues/5022/events | https://github.com/huggingface/transformers/issues/5022 | 639,008,090 | MDU6SXNzdWU2MzkwMDgwOTA= | 5,022 | Latest merge [Benchmark] Memory benchmark utils #4198 fails at Windows | {
"login": "songsuoyuan",
"id": 1378976,
"node_id": "MDQ6VXNlcjEzNzg5NzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1378976?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/songsuoyuan",
"html_url": "https://github.com/songsuoyuan",
"followers_url": "https://api.github.com/users/songsuoyuan/followers",
"following_url": "https://api.github.com/users/songsuoyuan/following{/other_user}",
"gists_url": "https://api.github.com/users/songsuoyuan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/songsuoyuan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/songsuoyuan/subscriptions",
"organizations_url": "https://api.github.com/users/songsuoyuan/orgs",
"repos_url": "https://api.github.com/users/songsuoyuan/repos",
"events_url": "https://api.github.com/users/songsuoyuan/events{/privacy}",
"received_events_url": "https://api.github.com/users/songsuoyuan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for noting that - you're 100% correct :-) "
] | 1,592 | 1,592 | 1,592 | NONE | null | # 🐛 Bug
## Information
Model I am using: Bert
## To reproduce
Steps to reproduce the behavior:
1. Windows
2. Install transformers from source
## Environment info
python = 3.7.6
pytorch = 1.5
cuda = 10.2
I think the problem happens because
https://github.com/huggingface/transformers/blob/master/src/transformers/__init__.py
contains `from .benchmark import PyTorchBenchmark, PyTorchBenchmarkArguments`,
and
https://github.com/huggingface/transformers/blob/master/src/transformers/benchmark/benchmark_utils.py
contains `from signal import SIGKILL`.
> signal.SIGKILL
>
> Kill signal.
> It cannot be caught, blocked, or ignored.
> Availability: Unix.
I think this will fail on Windows, since `from signal import SIGKILL` raises an `ImportError` there.
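A minimal sketch of one portable guard (my assumption of a possible fix, not necessarily what the maintainers will pick):

```
import signal

# SIGKILL only exists on Unix; SIGTERM is the closest portable fallback.
SIGKILL = getattr(signal, "SIGKILL", signal.SIGTERM)
```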
@patrickvonplaten
https://github.com/huggingface/transformers/pull/4198 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5022/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5022/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5021 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5021/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5021/comments | https://api.github.com/repos/huggingface/transformers/issues/5021/events | https://github.com/huggingface/transformers/pull/5021 | 638,999,645 | MDExOlB1bGxSZXF1ZXN0NDM0NjU5OTg3 | 5,021 | Add position_ids in TFElectra models docstring | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5021?src=pr&el=h1) Report\n> Merging [#5021](https://codecov.io/gh/huggingface/transformers/pull/5021?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1affde2f10c653e36601dd7a3e6a2525ae7ced57&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5021?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5021 +/- ##\n=======================================\n Coverage 77.26% 77.27% \n=======================================\n Files 128 128 \n Lines 21847 21847 \n=======================================\n+ Hits 16880 16882 +2 \n+ Misses 4967 4965 -2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5021?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5021/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `91.28% <ø> (ø)` | |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5021/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `76.35% <0.00%> (-2.30%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5021/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.20% <0.00%> (-0.24%)` | :arrow_down: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5021/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `34.07% <0.00%> (+5.41%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5021?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5021?src=pr&el=footer). Last update [1affde2...7ffe4b3](https://codecov.io/gh/huggingface/transformers/pull/5021?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,592 | 1,592 | 1,592 | COLLABORATOR | null | Just a small thing, but `position_ids` was forgotten in the TFElectra docstrings when those were added, I guess. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5021/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5021/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5021",
"html_url": "https://github.com/huggingface/transformers/pull/5021",
"diff_url": "https://github.com/huggingface/transformers/pull/5021.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5021.patch",
"merged_at": 1592250617000
} |