url (string, 62-66 chars) | repository_url (string, 1 value) | labels_url (string, 76-80 chars) | comments_url (string, 71-75 chars) | events_url (string, 69-73 chars) | html_url (string, 50-56 chars) | id (int64, 377M-2.15B) | node_id (string, 18-32 chars) | number (int64, 1-29.2k) | title (string, 1-487 chars) | user (dict) | labels (list) | state (string, 2 classes) | locked (bool, 2 classes) | assignee (dict) | assignees (list) | comments (sequence) | created_at (int64, 1.54k-1.71k) | updated_at (int64, 1.54k-1.71k) | closed_at (int64, 1.54k-1.71k, nullable) | author_association (string, 4 classes) | active_lock_reason (string, 2 classes) | body (string, 0-234k chars, nullable) | reactions (dict) | timeline_url (string, 71-75 chars) | state_reason (string, 3 classes) | draft (bool, 2 classes) | pull_request (dict) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/2412 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2412/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2412/comments | https://api.github.com/repos/huggingface/transformers/issues/2412/events | https://github.com/huggingface/transformers/pull/2412 | 545,651,674 | MDExOlB1bGxSZXF1ZXN0MzU5NDgxNTAy | 2,412 | Update Mish activation function to use torchscript JIT | {
"login": "iyaja",
"id": 30197649,
"node_id": "MDQ6VXNlcjMwMTk3NjQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/30197649?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iyaja",
"html_url": "https://github.com/iyaja",
"followers_url": "https://api.github.com/users/iyaja/followers",
"following_url": "https://api.github.com/users/iyaja/following{/other_user}",
"gists_url": "https://api.github.com/users/iyaja/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iyaja/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iyaja/subscriptions",
"organizations_url": "https://api.github.com/users/iyaja/orgs",
"repos_url": "https://api.github.com/users/iyaja/repos",
"events_url": "https://api.github.com/users/iyaja/events{/privacy}",
"received_events_url": "https://api.github.com/users/iyaja/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2412?src=pr&el=h1) Report\n> Merging [#2412](https://codecov.io/gh/huggingface/transformers/pull/2412?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0ffc8eaf53542092271a208a52e881668e753e72?src=pr&el=desc) will **decrease** coverage by `0.03%`.\n> The diff coverage is `35.71%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2412?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2412 +/- ##\n==========================================\n- Coverage 73.24% 73.21% -0.04% \n==========================================\n Files 87 87 \n Lines 14989 15002 +13 \n==========================================\n+ Hits 10979 10984 +5 \n- Misses 4010 4018 +8\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2412?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2412/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `86.55% <35.71%> (-1.15%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2412?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2412?src=pr&el=footer). Last update [0ffc8ea...286b55b](https://codecov.io/gh/huggingface/transformers/pull/2412?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,578 | 1,584 | 1,584 | NONE | null | This PR modifies the implementation of Mish to match that of the [fastai library](https://github.com/fastai/fastai_dev/blob/0f613ba3205990c83de9dba0c8798a9eec5452ce/dev/local/layers.py#L441). A discussion of the benefits of JIT for the Mish function can be found on the [fastai forums](https://forums.fast.ai/t/meet-mish-new-activation-function-possible-successor-to-relu/53299/587). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2412/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2412/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2412",
"html_url": "https://github.com/huggingface/transformers/pull/2412",
"diff_url": "https://github.com/huggingface/transformers/pull/2412.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2412.patch",
"merged_at": null
} |
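The pull request above replaces a plain Python Mish implementation with a TorchScript-compiled one. For context, here is a minimal sketch of what a JIT-scripted Mish activation looks like; it illustrates the idea only and is not the actual diff from the PR:

```python
import torch
import torch.nn.functional as F

@torch.jit.script
def mish(x):
    # Mish(x) = x * tanh(softplus(x)). Scripting the function lets the
    # TorchScript fuser combine the pointwise ops, which is the speed/memory
    # benefit discussed in the linked fastai thread.
    return x * torch.tanh(F.softplus(x))
```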
https://api.github.com/repos/huggingface/transformers/issues/2411 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2411/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2411/comments | https://api.github.com/repos/huggingface/transformers/issues/2411/events | https://github.com/huggingface/transformers/issues/2411 | 545,573,689 | MDU6SXNzdWU1NDU1NzM2ODk= | 2,411 | What is the difference between T5Model, T5WithLMHeadModel, T5PreTrainedModel? | {
"login": "g-jing",
"id": 44223191,
"node_id": "MDQ6VXNlcjQ0MjIzMTkx",
"avatar_url": "https://avatars.githubusercontent.com/u/44223191?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/g-jing",
"html_url": "https://github.com/g-jing",
"followers_url": "https://api.github.com/users/g-jing/followers",
"following_url": "https://api.github.com/users/g-jing/following{/other_user}",
"gists_url": "https://api.github.com/users/g-jing/gists{/gist_id}",
"starred_url": "https://api.github.com/users/g-jing/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/g-jing/subscriptions",
"organizations_url": "https://api.github.com/users/g-jing/orgs",
"repos_url": "https://api.github.com/users/g-jing/repos",
"events_url": "https://api.github.com/users/g-jing/events{/privacy}",
"received_events_url": "https://api.github.com/users/g-jing/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,578 | 1,584 | 1,584 | NONE | null | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
I notice that for the T5 model there are more choices (T5Model, T5WithLMHeadModel, T5PreTrainedModel) than for BERT or GPT. What is the difference between these three? I think all three are pre-trained models. We do not use T5PreTrainedModel in our downstream task code. Besides, the difference between T5Model and T5WithLMHeadModel is that the latter contains one more linear layer at the end. Am I right about these? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2411/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 2
} | https://api.github.com/repos/huggingface/transformers/issues/2411/timeline | completed | null | null |
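For context on the question above (it was closed as stale without an answer): `T5PreTrainedModel` is the abstract base class that handles weight initialization and loading and is not used directly in downstream code, `T5Model` returns raw decoder hidden states, and `T5WithLMHeadModel` adds a language-modeling head that projects those hidden states to vocabulary logits, which matches the asker's own guess. A minimal sketch of that relationship in plain PyTorch follows; the `base_model` argument is a stand-in, not the transformers implementation:

```python
import torch.nn as nn

class WithLMHead(nn.Module):
    """Illustrates how a *WithLMHeadModel variant relates to the bare *Model
    variant: the same backbone plus one linear projection to the vocabulary."""
    def __init__(self, base_model: nn.Module, d_model: int, vocab_size: int):
        super().__init__()
        self.base = base_model                         # analogous to T5Model
        self.lm_head = nn.Linear(d_model, vocab_size, bias=False)

    def forward(self, *args, **kwargs):
        hidden_states = self.base(*args, **kwargs)[0]  # last decoder hidden states
        return self.lm_head(hidden_states)             # vocabulary logits
```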
https://api.github.com/repos/huggingface/transformers/issues/2410 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2410/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2410/comments | https://api.github.com/repos/huggingface/transformers/issues/2410/events | https://github.com/huggingface/transformers/issues/2410 | 545,502,460 | MDU6SXNzdWU1NDU1MDI0NjA= | 2,410 | Typo in XLM moses pipeline. | {
"login": "alvations",
"id": 1050316,
"node_id": "MDQ6VXNlcjEwNTAzMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1050316?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alvations",
"html_url": "https://github.com/alvations",
"followers_url": "https://api.github.com/users/alvations/followers",
"following_url": "https://api.github.com/users/alvations/following{/other_user}",
"gists_url": "https://api.github.com/users/alvations/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alvations/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvations/subscriptions",
"organizations_url": "https://api.github.com/users/alvations/orgs",
"repos_url": "https://api.github.com/users/alvations/repos",
"events_url": "https://api.github.com/users/alvations/events{/privacy}",
"received_events_url": "https://api.github.com/users/alvations/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Indeed, thanks @alvations !",
"BTW, https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_xlm.py#L621 could also be simplified to the normalizer object from https://github.com/alvations/sacremoses/blob/master/sacremoses/normalize.py#L129\r\n\r\n\r\n```python\r\n def moses_punct_norm(self, text, lang):\r\n if lang not in self.cache_moses_punct_normalizer:\r\n punct_normalizer = sm.MosesPunctNormalizer(lang=lang, \r\n pre_replace_unicode_punct=True, post_remove_control_chars=True)\r\n self.cache_moses_punct_normalizer[lang] = punct_normalizer\r\n else:\r\n punct_normalizer = self.cache_moses_punct_normalizer[lang]\r\n return punct_normalizer.normalize(text)\r\n```\r\n\r\nThen the pipeline at https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_xlm.py#L635 would just be \r\n\r\n\r\n```python\r\n def moses_pipeline(self, text, lang):\r\n text = self.moses_punct_norm(text, lang)\r\n return text\r\n```"
] | 1,578 | 1,578 | 1,578 | NONE | null | The replacement on for the unicode punct replacement has a mistake at https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_xlm.py#L477 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2410/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2410/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2409 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2409/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2409/comments | https://api.github.com/repos/huggingface/transformers/issues/2409/events | https://github.com/huggingface/transformers/issues/2409 | 545,473,336 | MDU6SXNzdWU1NDU0NzMzMzY= | 2,409 | Error in pipeline() when model left as None | {
"login": "leungi",
"id": 30273868,
"node_id": "MDQ6VXNlcjMwMjczODY4",
"avatar_url": "https://avatars.githubusercontent.com/u/30273868?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leungi",
"html_url": "https://github.com/leungi",
"followers_url": "https://api.github.com/users/leungi/followers",
"following_url": "https://api.github.com/users/leungi/following{/other_user}",
"gists_url": "https://api.github.com/users/leungi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leungi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leungi/subscriptions",
"organizations_url": "https://api.github.com/users/leungi/orgs",
"repos_url": "https://api.github.com/users/leungi/repos",
"events_url": "https://api.github.com/users/leungi/events{/privacy}",
"received_events_url": "https://api.github.com/users/leungi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1771187924,
"node_id": "MDU6TGFiZWwxNzcxMTg3OTI0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Pipeline",
"name": "Core: Pipeline",
"color": "FF7066",
"default": false,
"description": "Internals of the library; Pipeline."
}
] | closed | false | null | [] | [
"Upgraded to Python 3.6.7, and two of _tasks_ (sentiment-analysis and question-answering) works as expected (i.e., no error without specifying `model` args).\r\n\r\nThe remaining two _tasks_ (ner and feature-extraction) fail on a new (similar) error:\r\n\r\n#### feature-extraction\r\n```py\r\n>>> from transformers import pipeline\r\nTo use data.metrics please install scikit-learn. See https://scikit-learn.org/stable/index.html\r\n>>> nlp = pipeline('feature-extraction')\r\nTraceback (most recent call last):\r\n File \"D:\\Continuum\\anaconda3\\envs\\transformers-py36\\lib\\site-packages\\transformers\\modeling_utils.py\", line 415, in from_pretrained\r\n state_dict = torch.load(resolved_archive_file, map_location='cpu')\r\n File \"D:\\Continuum\\anaconda3\\envs\\transformers-py36\\lib\\site-packages\\torch\\serialization.py\", line 426, in load\r\n return _load(f, map_location, pickle_module, **pickle_load_args)\r\n File \"D:\\Continuum\\anaconda3\\envs\\transformers-py36\\lib\\site-packages\\torch\\serialization.py\", line 620, in _load\r\n deserialized_objects[key]._set_from_file(f, offset, f_should_read_directly)\r\nRuntimeError: unexpected EOF, expected 9211648 more bytes. The file might be corrupted.\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"D:\\Continuum\\anaconda3\\envs\\transformers-py36\\lib\\site-packages\\transformers\\pipelines.py\", line 905, in pipeline\r\n model = model_class.from_pretrained(model, config=config, **model_kwargs)\r\n File \"D:\\Continuum\\anaconda3\\envs\\transformers-py36\\lib\\site-packages\\transformers\\modeling_auto.py\", line 238, in from_pretrained\r\n return DistilBertModel.from_pretrained(pretrained_model_name_or_path, *model_args, **kwargs)\r\n File \"D:\\Continuum\\anaconda3\\envs\\transformers-py36\\lib\\site-packages\\transformers\\modeling_utils.py\", line 417, in from_pretrained\r\n raise OSError(\"Unable to load weights from pytorch checkpoint file. \"\r\nOSError: Unable to load weights from pytorch checkpoint file. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.\r\n```\r\n\r\n#### ner\r\n```py\r\n>>> from transformers import pipeline\r\nTo use data.metrics please install scikit-learn. See https://scikit-learn.org/stable/index.html\r\n>>> nlp = pipeline('ner')\r\nTraceback (most recent call last):\r\n File \"D:\\Continuum\\anaconda3\\envs\\transformers-py36\\lib\\site-packages\\transformers\\modeling_utils.py\", line 415, in from_pretrained\r\n state_dict = torch.load(resolved_archive_file, map_location='cpu')\r\n File \"D:\\Continuum\\anaconda3\\envs\\transformers-py36\\lib\\site-packages\\torch\\serialization.py\", line 426, in load\r\n return _load(f, map_location, pickle_module, **pickle_load_args)\r\n File \"D:\\Continuum\\anaconda3\\envs\\transformers-py36\\lib\\site-packages\\torch\\serialization.py\", line 620, in _load\r\n deserialized_objects[key]._set_from_file(f, offset, f_should_read_directly)\r\nRuntimeError: unexpected EOF, expected 3733591 more bytes. 
The file might be corrupted.\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"D:\\Continuum\\anaconda3\\envs\\transformers-py36\\lib\\site-packages\\transformers\\pipelines.py\", line 905, in pipeline\r\n model = model_class.from_pretrained(model, config=config, **model_kwargs)\r\n File \"D:\\Continuum\\anaconda3\\envs\\transformers-py36\\lib\\site-packages\\transformers\\modeling_auto.py\", line 882, in from_pretrained\r\n return BertForTokenClassification.from_pretrained(pretrained_model_name_or_path, *model_args, **kwargs)\r\n File \"D:\\Continuum\\anaconda3\\envs\\transformers-py36\\lib\\site-packages\\transformers\\modeling_utils.py\", line 417, in from_pretrained\r\n raise OSError(\"Unable to load weights from pytorch checkpoint file. \"\r\nOSError: Unable to load weights from pytorch checkpoint file. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.\r\n```\r\n\r\n#### Troubleshooting attempts\r\n1. Tried specifying `model` args, but Python crashes every time.\r\n2. Tried adding `force_download=True`, but same error as above",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"To close the loop.\r\n\r\nUsing `transformers-2.5.1` solves the issue.\r\n\r\nThanks!"
] | 1,578 | 1,584 | 1,584 | NONE | null | ## 🐛 Bug
<!-- Important information -->
Model I am using (Bert, XLNet....): Default models as per `SUPPORTED_TASKS` config in [pipeline.py](https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines.py)
Language I am using the model on (English, Chinese....): English
The problem arises when using:
* [x] the official example scripts: (`pipeline.py`)
* [ ] my own modified scripts: (give details)
The tasks I am working on are:
* [x] an official GLUE/SQUaD task: (`question-answering`, `ner`, `feature-extraction`, `sentiment-analysis`)
* [ ] my own task or dataset: (give details)
## To Reproduce
Steps to reproduce the behavior:
1. Install `transformers` 2.3.0
2. Run [example](https://github.com/huggingface/transformers#quick-tour-of-pipelines)
<!-- If you have a code sample, error messages, stack traces, please provide it here as well. -->
```py
from transformers import pipeline
>>> nlp = pipeline('question-answering')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "D:\Continuum\anaconda3\envs\transformers\lib\site-packages\transformers\pipelines.py", line 860, in pipeline
model = models[framework]
TypeError: string indices must be integers
>>> nlp = pipeline('question-answering', model='distilbert-base-uncased-distilled-squad', tokenizer='distilbert-base-uncased')
```
## Expected behavior
Leaving the `model`/`tokenizer` args as `None` should not yield an error.
<!-- A clear and concise description of what you expected to happen. -->
## Environment
* OS: system='Windows', release='10', version='10.0.17134', machine='AMD64'
* Python version: 3.5.5
* PyTorch version: 1.3.1
* PyTorch Transformers version (or branch): 2.3.0
* Using GPU ? No
* Distributed or parallel setup? No
* Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2409/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2409/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2408 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2408/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2408/comments | https://api.github.com/repos/huggingface/transformers/issues/2408/events | https://github.com/huggingface/transformers/issues/2408 | 545,438,391 | MDU6SXNzdWU1NDU0MzgzOTE= | 2,408 | Can't download models or model config | {
"login": "cnzjhdx",
"id": 19799092,
"node_id": "MDQ6VXNlcjE5Nzk5MDky",
"avatar_url": "https://avatars.githubusercontent.com/u/19799092?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cnzjhdx",
"html_url": "https://github.com/cnzjhdx",
"followers_url": "https://api.github.com/users/cnzjhdx/followers",
"following_url": "https://api.github.com/users/cnzjhdx/following{/other_user}",
"gists_url": "https://api.github.com/users/cnzjhdx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cnzjhdx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cnzjhdx/subscriptions",
"organizations_url": "https://api.github.com/users/cnzjhdx/orgs",
"repos_url": "https://api.github.com/users/cnzjhdx/repos",
"events_url": "https://api.github.com/users/cnzjhdx/events{/privacy}",
"received_events_url": "https://api.github.com/users/cnzjhdx/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, I'm not sure I see what exactly is your problem ? Was there something following this message, like an error or a warning ?",
"OSError: Couldn't reach server at 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-pytorch_model.bin' to download pretrained weights. \r\nit is disconnect service",
"ok,thanks."
] | 1,578 | 1,578 | 1,578 | NONE | null | ## ❓ Questions & Help
When I run fine-tuning examples such as run_squad.py, it produces an error like this:
E:\tensorflow_natural_question\transformers\examples>python run_squad.py --model_type bert --model_name_or_path bert-base-cased --do_train --do_eval --do_lower_case --train_file $SQUAD_DIR/train-v2.0.json --predict_file $SQUAD_DIR/dev-v2.0.json --per_gpu_train_batch_size 2 --learning_rate 3e-5 --num_train_epochs 1.0 --max_seq_length 384 --doc_stride 128 --output_dir /tmp/debug_squad/
2020-01-05 23:58:07.219057: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll
01/05/2020 23:58:08 - WARNING - __main__ - Process rank: -1, device: cuda, n_gpu: 1, distributed training: False, 16-bits training: False
01/05/2020 23:58:13 - INFO - filelock - Lock 2170508660632 acquired on C:\Users\Administrator\.cache\torch\transformers\b945b69218e98b3e2c95acf911789741307dec43c698d35fad11c1ae28bda352.d7a3af18ce3a2ab7c0f48f04dc8daff45ed9a3ed333b9e9a79d012a0dedf87a6.lock
01/05/2020 23:58:13 - INFO - transformers.file_utils - https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-config.json not found in cache or force_download set to True, downloading to C:\Users\Administrator\.cache\torch\transformers\tmptcqtrh98
win10+pytorch-gpu1.2.0+python3.7.3
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2408/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2408/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2407 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2407/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2407/comments | https://api.github.com/repos/huggingface/transformers/issues/2407/events | https://github.com/huggingface/transformers/pull/2407 | 545,424,991 | MDExOlB1bGxSZXF1ZXN0MzU5MzA2NzY2 | 2,407 | [cli] Add support for T5 model conversion | {
"login": "NaxAlpha",
"id": 11090613,
"node_id": "MDQ6VXNlcjExMDkwNjEz",
"avatar_url": "https://avatars.githubusercontent.com/u/11090613?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NaxAlpha",
"html_url": "https://github.com/NaxAlpha",
"followers_url": "https://api.github.com/users/NaxAlpha/followers",
"following_url": "https://api.github.com/users/NaxAlpha/following{/other_user}",
"gists_url": "https://api.github.com/users/NaxAlpha/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NaxAlpha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NaxAlpha/subscriptions",
"organizations_url": "https://api.github.com/users/NaxAlpha/orgs",
"repos_url": "https://api.github.com/users/NaxAlpha/repos",
"events_url": "https://api.github.com/users/NaxAlpha/events{/privacy}",
"received_events_url": "https://api.github.com/users/NaxAlpha/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2407?src=pr&el=h1) Report\n> Merging [#2407](https://codecov.io/gh/huggingface/transformers/pull/2407?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/80faf22b4ac194061a08fde09ad8b202118c151e?src=pr&el=desc) will **decrease** coverage by `1.16%`.\n> The diff coverage is `0%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2407?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2407 +/- ##\n==========================================\n- Coverage 73.24% 72.08% -1.17% \n==========================================\n Files 87 87 \n Lines 14989 14993 +4 \n==========================================\n- Hits 10979 10808 -171 \n- Misses 4010 4185 +175\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2407?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/commands/convert.py](https://codecov.io/gh/huggingface/transformers/pull/2407/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy9jb252ZXJ0LnB5) | `0% <0%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2407/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2407/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `54.1% <0%> (-10.15%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2407/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `70.6% <0%> (-2.3%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2407/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `92% <0%> (-2.23%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2407/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.86% <0%> (-1.35%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2407/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `60.65% <0%> (-0.69%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/2407/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `67.73% <0%> (-0.59%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2407?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2407?src=pr&el=footer). Last update [80faf22...064bddf](https://codecov.io/gh/huggingface/transformers/pull/2407?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,578 | 1,584 | 1,584 | NONE | null | I have added support for converting t5 model from CLI.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2407/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2407/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2407",
"html_url": "https://github.com/huggingface/transformers/pull/2407",
"diff_url": "https://github.com/huggingface/transformers/pull/2407.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2407.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/2406 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2406/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2406/comments | https://api.github.com/repos/huggingface/transformers/issues/2406/events | https://github.com/huggingface/transformers/issues/2406 | 545,421,060 | MDU6SXNzdWU1NDU0MjEwNjA= | 2,406 | BERT's Embedding/Vocab Size in Code is Different from Provided Pretrained Config | {
"login": "LFhase",
"id": 20450765,
"node_id": "MDQ6VXNlcjIwNDUwNzY1",
"avatar_url": "https://avatars.githubusercontent.com/u/20450765?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LFhase",
"html_url": "https://github.com/LFhase",
"followers_url": "https://api.github.com/users/LFhase/followers",
"following_url": "https://api.github.com/users/LFhase/following{/other_user}",
"gists_url": "https://api.github.com/users/LFhase/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LFhase/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LFhase/subscriptions",
"organizations_url": "https://api.github.com/users/LFhase/orgs",
"repos_url": "https://api.github.com/users/LFhase/repos",
"events_url": "https://api.github.com/users/LFhase/events{/privacy}",
"received_events_url": "https://api.github.com/users/LFhase/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The default configuration in `configuration_bert.py` is for `bert-base-uncased` model. I am not sure what you are trying to do here will work or not but here is what I would suggest try doing:\r\n\r\nFirst Load configuration manually from `bert-base-case` json. Then change the parameters you want to change and then pass it to `from_pretrained` function",
"> The default configuration in `configuration_bert.py` is for `bert-base-uncased` model. I am not sure what you are trying to do here will work or not but here is what I would suggest try doing:\r\n> \r\n> First Load configuration manually from `bert-base-case` json. Then change the parameters you want to change and then pass it to `from_pretrained` function\r\n\r\nThank you NaxAlpha for your immediate reply!\r\n\r\nMy intention is just simply to get outputs of all hidden layers from a pre-trained BERT and found this 'issue'. Your solution sounds good! \r\n\r\nIn the future, might it be better to load the corresponding config according to the input parameter, i.e., the string like `bert-base-uncased` or `bert-base-cased`, since the weights are also loaded according to this string?",
"Great. I have verified that it is working:\r\n\r\nhttps://colab.research.google.com/drive/1IPgcACm38dIUaj9RqTWw9xbwwywIOpXf",
"As @NaxAlpha says, the default parameters are that of the `bert-base-uncased` model. If you wish to instantiate a `BertConfig` from the `bert-base-cased` model with the `output_hidden_states` flag set to `True`, you would do it as follows:\r\n\r\n```py\r\nconfig = BertConfig.from_pretrained(\"bert-base-cased\", output_hidden_states=True)\r\nmodel = BertModel.from_pretrained(\"bert-base-cased\", config=config)\r\n```",
"Thanks, guys.\r\nYour replies solve my question well. ",
"I am on Ubuntu where also reports \r\n```\r\nError(s) in loading state_dict for BertForSequenceClassification:\r\n\tsize mismatch for bert.embeddings.word_embeddings.weight: copying a param with shape torch.Size([30522, 768]) from checkpoint, the shape in current model is torch.Size([28996, 768]).\r\n```\r\nI am currently checking solutions above.",
"> I am on Ubuntu where also reports\r\n> \r\n> ```\r\n> Error(s) in loading state_dict for BertForSequenceClassification:\r\n> \tsize mismatch for bert.embeddings.word_embeddings.weight: copying a param with shape torch.Size([30522, 768]) from checkpoint, the shape in current model is torch.Size([28996, 768]).\r\n> ```\r\n> \r\n> I am currently checking solutions above.\r\n\r\nHi, I think it may be better to also post the minimal code to reproduce the issue here. ",
"Was facing a similar issue previously as I tried to adapt allenai/scibert_scivocab_cased model. I was previously still using the bert config.json. By making sure the config.json matches the model I am using (in my case was the scibert config), was able to bypass this issue. "
] | 1,578 | 1,648 | 1,578 | NONE | null | ## 🐛 A Subtle Bug
Hi, I really appreciate your work but I found a subtle problem here. Could you take a look of it?
- The model I am using is **BERT**.
- The language I am using the model on is **English**.
- The problem arises when using:
- The task I am working on is to simply initialize a BERT object with my own modifications to config, i.e., `BertConfig` class.
## To Reproduce
Steps to reproduce the behavior:
Simply run this line:
```
BertModel.from_pretrained("bert-base-cased",config=BertConfig(output_hidden_states=True))
```
Then we have following error message:
`
File "D:\Anaconda3\envs\gnner\lib\site-packages\transformers\modeling_utils.py", line 486, in from_pretrained
model.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for BertModel:
size mismatch for bert.embeddings.word_embeddings.weight: copying a param with shape torch.Size([28996, 768]) from checkpoint, the shape in current model is torch.Size([30522, 768]).
`
## Expected behavior
It should run without error instead of failing like this.
## Possible reason
The issue is caused by `line 86` in `configuration_bert.py`, where the vocabulary size is **`30522`**. I believe the default vocabulary size should be consistent with the one in the config file, i.e., `https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-config.json`, where it's **`28996`**.
## Environment
* OS: Windows 10
* Python version: 3.7.3
* PyTorch version: 1.3.0
* PyTorch Transformers version (or branch): latest pip package
* Using GPU? I believe this is independent of the environment.
<!-- Add any other context about the problem here. --> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2406/reactions",
"total_count": 4,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2406/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2405 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2405/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2405/comments | https://api.github.com/repos/huggingface/transformers/issues/2405/events | https://github.com/huggingface/transformers/issues/2405 | 545,411,968 | MDU6SXNzdWU1NDU0MTE5Njg= | 2,405 | weird resize during the initialization in the PreTrainedModel | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This layer is resized in `self.init_weights` because it is sharing weights with the embedding layer. They need to be the same size.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,578 | 1,584 | 1,584 | NONE | null | Hi
I am using BertForMaskedLM in the run_lm_finetuning.py code. This module calls BertLMPredictionHead, which contains a decoder layer of size hidden_size * vocab_size. I would like to change the dimension of this layer, but when I do, the call to self.init_weights() inside BertForMaskedLM resizes the decoder weights again. I cannot track down where exactly this happens; thanks for your advice on how I can resolve this issue. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2405/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2405/timeline | completed | null | null |
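The reply above refers to weight tying: in BERT's masked-LM head, the decoder projection shares its weight matrix with the input embeddings, so it must stay at vocab_size x hidden_size, and changing one side independently is undone when the weights are (re-)tied during `init_weights`. A minimal toy sketch of that coupling (illustrative only, not the transformers source):

```python
import torch.nn as nn

class TiedMLMHead(nn.Module):
    """Toy example of embedding/decoder weight sharing: the decoder's weight
    *is* the embedding matrix, so their shapes cannot diverge."""
    def __init__(self, vocab_size: int, hidden_size: int):
        super().__init__()
        self.word_embeddings = nn.Embedding(vocab_size, hidden_size)
        self.decoder = nn.Linear(hidden_size, vocab_size, bias=False)
        self.decoder.weight = self.word_embeddings.weight  # shared storage
```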
https://api.github.com/repos/huggingface/transformers/issues/2404 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2404/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2404/comments | https://api.github.com/repos/huggingface/transformers/issues/2404/events | https://github.com/huggingface/transformers/issues/2404 | 545,410,457 | MDU6SXNzdWU1NDU0MTA0NTc= | 2,404 | Pretrained Model not available | {
"login": "punit121",
"id": 13787642,
"node_id": "MDQ6VXNlcjEzNzg3NjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/13787642?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/punit121",
"html_url": "https://github.com/punit121",
"followers_url": "https://api.github.com/users/punit121/followers",
"following_url": "https://api.github.com/users/punit121/following{/other_user}",
"gists_url": "https://api.github.com/users/punit121/gists{/gist_id}",
"starred_url": "https://api.github.com/users/punit121/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/punit121/subscriptions",
"organizations_url": "https://api.github.com/users/punit121/orgs",
"repos_url": "https://api.github.com/users/punit121/repos",
"events_url": "https://api.github.com/users/punit121/events{/privacy}",
"received_events_url": "https://api.github.com/users/punit121/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Could you describe your issue in more details? e.g. share some code on what you are trying to do and what is not working?",
"I have the same issue.\r\n\r\n\r\n\r\n",
"I download a pretrained model and unzip it to my path. When I load it through BertTokenizer, it cannot be found. Could you please tell me what/how to check? @NaxAlpha ",
"the Error is \r\nValueError : \" Can't find a vocabulary file at path **\\.cache\\**\". \r\n it seems the program try to load file through cache instead of assigned path. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"i got the same error from AWS container training... as @punit121 . does it mean it didn't successfully download the pretrained_model?",
"@tbs17 Nopes, probably this is path error \r\ncan you share screenshot of your code ? \r\n\r\n",
"Got the same error running in a Docker container, while the same code works completely fine locally. \r\n\r\n\r\n```\r\nserver_1 | INFO:pytorch_transformers.tokenization_utils:Didn't find file model/added_tokens.json. We won't load it.\r\nserver_1 | INFO:pytorch_transformers.tokenization_utils:Didn't find file model/special_tokens_map.json. We won't load it.\r\n```\r\nThe pretrained model is saved in the path `/model`. This path setup is the same as how I do it locally/separately. But I can't seem to figure out the issue as to why when I integrate this code into a Docker container, it hits these errors. Furthermore, I have confirmed that I am in the right path and am able to access the `model` subdirectory and that the file `model/bert_config.json` is able to be accessed.\r\n\r\nAny ideas for how to resolve this issue? ",
"@catyeo18, these are not errors, it just indicates that your tokenizer has no additional added tokens or special tokens.\r\n\r\nIf you're using `model.from_pretrained`, please note that the configuration must absolutely be named `config.json`, and not `bert_config.json`.",
"@LysandreJik thank you for your response -- I don't understand why that is the case when my tokenizer works fine when I run my scripts locally, but yield those error messages when I run my scripts in a Docker container. There is no difference in my code or file setup.",
"@catyeo18 I spent some time digging into the code and I think the reason is that in `transformers/file_utils.py`, if the file is not there but you have internet connection to check, the code just let that fail silently, basically it means that the script check in `.cache` as well as try to download but not found so just ignore it. However, when setup in an environment without internet connnection (docker for example), the script cannot find the file in `cache` but also cannot check if the file is available online since there's no internet, thus it throws the error. "
] | 1,578 | 1,627 | 1,584 | NONE | null | ## ❓ Questions & Help
01/05/2020 12:18:00 - INFO - root - finetuned model not available - loading standard pretrained model
01/05/2020 12:18:00 - INFO - transformers.tokenization_utils - Model name '/opt/ml/code/pretrained_models/bert-base-uncased' not found in model shortcut name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, bert-base-finnish-cased-v1, bert-base-finnish-uncased-v1). Assuming '/opt/ml/code/pretrained_models/bert-base-uncased' is a path or url to a directory containing tokenizer files.
01/05/2020 12:18:00 - INFO - transformers.tokenization_utils - Didn't find file /opt/ml/code/pretrained_models/bert-base-uncased/added_tokens.json. We won't load it.
01/05/2020 12:18:00 - INFO - transformers.tokenization_utils - Didn't find file /opt/ml/code/pretrained_models/bert-base-uncased/special_tokens_map.json. We won't load it.
01/05/2020 12:18:00 - INFO - transformers.tokenization_utils - Didn't find file /opt/ml/code/pretrained_models/bert-base-uncased/tokenizer_config.json. We won't load it.
01/05/2020 12:18:00 - INFO - transformers.tokenization_utils - loading file /opt/ml/code/pretrained_models/bert-base-uncased/vocab.txt
01/05/2020 12:18:00 - INFO - transformers.tokenization_utils - loading file None
01/05/2020 12:18:00 - INFO - transformers.tokenization_utils - loading file None
01/05/2020 12:18:00 - INFO - transformers.tokenization_utils - loading file None | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2404/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2404/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2403 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2403/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2403/comments | https://api.github.com/repos/huggingface/transformers/issues/2403/events | https://github.com/huggingface/transformers/pull/2403 | 545,398,064 | MDExOlB1bGxSZXF1ZXN0MzU5Mjg4MDYw | 2,403 | Add support for Albert and XLMRoberta for the Glue example | {
"login": "simonepri",
"id": 3505087,
"node_id": "MDQ6VXNlcjM1MDUwODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3505087?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/simonepri",
"html_url": "https://github.com/simonepri",
"followers_url": "https://api.github.com/users/simonepri/followers",
"following_url": "https://api.github.com/users/simonepri/following{/other_user}",
"gists_url": "https://api.github.com/users/simonepri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/simonepri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/simonepri/subscriptions",
"organizations_url": "https://api.github.com/users/simonepri/orgs",
"repos_url": "https://api.github.com/users/simonepri/repos",
"events_url": "https://api.github.com/users/simonepri/events{/privacy}",
"received_events_url": "https://api.github.com/users/simonepri/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2403?src=pr&el=h1) Report\n> Merging [#2403](https://codecov.io/gh/huggingface/transformers/pull/2403?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/80faf22b4ac194061a08fde09ad8b202118c151e?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2403?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2403 +/- ##\n=======================================\n Coverage 73.24% 73.24% \n=======================================\n Files 87 87 \n Lines 14989 14989 \n=======================================\n Hits 10979 10979 \n Misses 4010 4010\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2403?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2403?src=pr&el=footer). Last update [80faf22...ff6dacf](https://codecov.io/gh/huggingface/transformers/pull/2403?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,578 | 1,578 | 1,578 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2403/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2403/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2403",
"html_url": "https://github.com/huggingface/transformers/pull/2403",
"diff_url": "https://github.com/huggingface/transformers/pull/2403.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2403.patch",
"merged_at": 1578405355000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/2402 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2402/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2402/comments | https://api.github.com/repos/huggingface/transformers/issues/2402/events | https://github.com/huggingface/transformers/issues/2402 | 545,361,440 | MDU6SXNzdWU1NDUzNjE0NDA= | 2,402 | BertForTokenClassification can not from_pretrained the fine-tuned model? | {
"login": "trueto",
"id": 15409619,
"node_id": "MDQ6VXNlcjE1NDA5NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/15409619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/trueto",
"html_url": "https://github.com/trueto",
"followers_url": "https://api.github.com/users/trueto/followers",
"following_url": "https://api.github.com/users/trueto/following{/other_user}",
"gists_url": "https://api.github.com/users/trueto/gists{/gist_id}",
"starred_url": "https://api.github.com/users/trueto/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/trueto/subscriptions",
"organizations_url": "https://api.github.com/users/trueto/orgs",
"repos_url": "https://api.github.com/users/trueto/repos",
"events_url": "https://api.github.com/users/trueto/events{/privacy}",
"received_events_url": "https://api.github.com/users/trueto/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi there, I had this same issue.\r\nIn my case, it was a tokenizer issue. For \r\n`--tokenizer_name` use \"bert-base-multilingual-cased\" or \"bert-base-multilingual-uncased\" solved the problem.",
"> Hi there, I had this same issue.\r\n> In my case, it was a tokenizer issue. For\r\n> `--tokenizer_name` use \"bert-base-multilingual-cased\" or \"bert-base-multilingual-uncased\" solved the problem.\r\n\r\nThanks! I tried it. But it can't work in my case",
"sorry, it's my fault. I read the data in 'latin1' encode, and skiped the whole line when the length of tokens does not equal that of label_ids. Change the csv file as 'utf8' encoding format, then everthing work as excepted!"
] | 1,578 | 1,578 | 1,578 | NONE | null | ## ❓ Questions & Help
Thanks for the great work!
However, when I wrapped the `run_ner.py` script in a scikit-learn API style for non-specialists, I ran into a problem.
Training and evaluation work fine, but at prediction time the F1-score is much lower than during evaluation, as shown below:
Evaluation result: an F1-score of 0.8242
```
***** Eval results 500 *****
f1 = 0.8101377518505809
loss = 0.10396960769538525
precision = 0.8009887553315238
recall = 0.8194981652286026
***** Eval results 1000 *****
f1 = 0.8242496050552922
loss = 0.09259376796307388
precision = 0.8206035584390052
recall = 0.8279281959734206
```
Prediction results: an F1-score of 0.0934
```
precision recall f1-score support
tim 0.0954 0.0943 0.0949 2014
org 0.0743 0.0688 0.0714 2021
geo 0.1004 0.1087 0.1044 3771
per 0.0843 0.0864 0.0853 1644
gpe 0.1022 0.1010 0.1016 1623
nat 0.3333 0.0769 0.1250 13
art 0.0000 0.0000 0.0000 51
eve 0.0400 0.0476 0.0435 21
micro avg 0.0930 0.0938 0.0934 11158
macro avg 0.0924 0.0938 0.0929 11158
```
**Why? I have checked the code many times. Is there a bug in saving or loading the fine-tuned model?**
The fine-tuning and prediction script is based on `transformers-sklearn`:
```python
import pandas as pd
from sklearn.model_selection import train_test_split
from transformers_sklearn import BERTologyNERClassifer
if __name__ == '__main__':
data_df = pd.read_csv('datasets/gmbner/ner_dataset.csv',encoding="utf8")
data_df.fillna(method="ffill",inplace=True)
value_counts = data_df['Tag'].value_counts()
label_list = list(value_counts.to_dict().keys())
# ## 1. preparing data
X = []
y = []
for label, batch_df in data_df.groupby(by='Sentence #',sort=False):
words = batch_df['Word'].tolist()
labels = batch_df['Tag'].tolist()
assert len(words) == len(labels)
X.append(words)
y.append(labels)
X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.1,random_state=520)
## 2. customize model
ner = BERTologyNERClassifer(
labels=label_list,
model_type='bert',
model_name_or_path='bert-base-cased',
data_dir='ts_data/gmbner',
output_dir='results/gmbner',
num_train_epochs=3,
learning_rate=5e-5,
logging_steps=500,
save_steps=500,
overwrite_output_dir=True
)
#
## 3. fit
ner.fit(X_train, y_train)
# # # #
## 4. score
report = ner.score(X_test, y_test)
with open('gmbner.txt', 'w', encoding='utf8') as f:
f.write(report)
```
This is the two scripts in [transformers-sklearn](https://github.com/trueto/transformers_sklearn) for NER task.
`token_classification.py`
```python
import os
import torch
import random
import logging
import numpy as np
from tqdm import tqdm, trange
from torch.nn import CrossEntropyLoss
from torch.utils.data import random_split,TensorDataset,\
DistributedSampler,RandomSampler,SequentialSampler,DataLoader
from tensorboardX import SummaryWriter
from transformers_sklearn.utils.token_classification_utils import get_labels,\
read_examples_from_X_y,convert_examples_to_features
from transformers_sklearn.utils.data_utils import to_numpy
from sklearn.base import BaseEstimator,ClassifierMixin
from transformers_sklearn.utils.metrics_utils import f1_score,recall_score,precision_score,classification_report
from transformers import AdamW, get_linear_schedule_with_warmup
from transformers import BertConfig, BertForTokenClassification, BertTokenizer
from transformers import RobertaConfig, RobertaForTokenClassification, RobertaTokenizer
from transformers import DistilBertConfig, DistilBertForTokenClassification, DistilBertTokenizer
from transformers import CamembertConfig, CamembertForTokenClassification, CamembertTokenizer
# from transformers import AlbertConfig,AlbertTokenizer
from transformers_sklearn.model_albert import AlbertForTokenClassification,AlbertTokenizer,AlbertConfig
ALL_MODELS = sum(
(tuple(conf.pretrained_config_archive_map.keys()) for conf in (BertConfig, RobertaConfig, DistilBertConfig)),
())
MODEL_CLASSES = {
"bert": (BertConfig, BertForTokenClassification, BertTokenizer),
"roberta": (RobertaConfig, RobertaForTokenClassification, RobertaTokenizer),
"distilbert": (DistilBertConfig, DistilBertForTokenClassification, DistilBertTokenizer),
"camembert": (CamembertConfig, CamembertForTokenClassification, CamembertTokenizer),
"albert":(AlbertConfig,AlbertForTokenClassification,AlbertTokenizer)
}
logger = logging.getLogger(__name__)
def set_seed(seed=520,n_gpu=1):
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
if n_gpu > 0:
torch.cuda.manual_seed_all(seed)
class BERTologyNERClassifer(BaseEstimator,ClassifierMixin):
def __init__(self,labels,data_dir='ts_data',model_type='bert',
model_name_or_path='bert-base-chinese',
output_dir='ts_results',config_name='',
tokenizer_name='',cache_dir='model_cache',
max_seq_length=512,do_lower_case=False,
per_gpu_train_batch_size=8,per_gpu_eval_batch_size=8,
gradient_accumulation_steps=1,
learning_rate=5e-5,weight_decay=0.0,
adam_epsilon=1e-8,max_grad_norm=1.0,
num_train_epochs=3.0,max_steps=-1,
warmup_steps=0,logging_steps=50,
save_steps=50,evaluate_during_training=True,
no_cuda=False,overwrite_output_dir=False,
overwrite_cache=False,seed=520,
fp16=False,fp16_opt_level='01',
local_rank=-1,val_fraction=0.1):
self.labels = labels
self.data_dir = data_dir
self.model_type = model_type
self.model_name_or_path = model_name_or_path
self.output_dir = output_dir
self.config_name = config_name
self.tokenizer_name = tokenizer_name
self.max_seq_length = max_seq_length
self.do_lower_case = do_lower_case
self.cache_dir = cache_dir
self.per_gpu_train_batch_size = per_gpu_train_batch_size
self.per_gpu_eval_batch_size = per_gpu_eval_batch_size
self.gradient_accumulation_steps = gradient_accumulation_steps
self.learning_rate = learning_rate
self.weight_decay = weight_decay
self.adam_epsilon = adam_epsilon
self.max_grad_norm = max_grad_norm
self.num_train_epochs = num_train_epochs
self.max_steps = max_steps
self.warmup_steps = warmup_steps
self.logging_steps = logging_steps
self.save_steps = save_steps
self.evaluate_during_training = evaluate_during_training
self.no_cuda = no_cuda
self.overwrite_output_dir = overwrite_output_dir
self.overwrite_cache = overwrite_cache
self.seed = seed
self.fp16 = fp16
self.fp16_opt_level = fp16_opt_level
self.local_rank = local_rank
self.val_fraction = val_fraction
self.id2label = {i: label for i, label in enumerate(self.labels)}
self.label_map = {label: i for i, label in enumerate(self.labels)}
# Setup CUDA, GPU & distributed training
if self.local_rank == -1 or self.no_cuda:
device = torch.device("cuda" if torch.cuda.is_available() and not self.no_cuda else "cpu")
self.n_gpu = torch.cuda.device_count() if not self.no_cuda else 1
else: # Initializes the distributed backend which will take care of synchronizing nodes/GPUs
torch.cuda.set_device(self.local_rank)
device = torch.device("cuda", self.local_rank)
torch.distributed.init_process_group(backend="nccl")
self.n_gpu = 1
self.device = device
# Setup logging
logging.basicConfig(format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
datefmt="%m/%d/%Y %H:%M:%S",
level=logging.INFO if self.local_rank in [-1, 0] else logging.WARN)
logger.warning("Process rank: %s, device: %s, n_gpu: %s, distributed training: %s, 16-bits training: %s",
self.local_rank, device, self.n_gpu, bool(self.local_rank != -1), self.fp16)
# Set seed
set_seed(seed=self.seed,n_gpu=self.n_gpu)
def fit(self,X,y):
if not os.path.exists(self.data_dir):
os.mkdir(self.data_dir)
if not os.path.exists(self.output_dir):
os.mkdir(self.output_dir)
if os.path.exists(self.output_dir) and os.listdir(
self.output_dir) and not self.overwrite_output_dir:
raise ValueError(
"Output directory ({}) already exists and is not empty. Use --overwrite_output_dir to overcome.".format(
self.output_dir))
num_labels = len(self.labels)
# self.labels = labels
# Use cross entropy ignore index as padding label id so that only real label ids contribute to the loss later
pad_token_label_id = CrossEntropyLoss().ignore_index
self.pad_token_label_id = pad_token_label_id
# Load pretrained model and tokenizer
if self.local_rank not in [-1, 0]:
torch.distributed.barrier() # Make sure only the first process in distributed training will download model & vocab
self.model_type = self.model_type.lower()
config_class, model_class, tokenizer_class = MODEL_CLASSES[self.model_type]
config = config_class.from_pretrained(self.config_name if self.config_name else self.model_name_or_path,
num_labels=num_labels,
cache_dir=self.cache_dir if self.cache_dir else None,
share_type='all' if self.model_type=='albert' else None)
tokenizer = tokenizer_class.from_pretrained(
self.tokenizer_name if self.tokenizer_name else self.model_name_or_path,
do_lower_case=self.do_lower_case,
cache_dir=self.cache_dir if self.cache_dir else None)
model = model_class.from_pretrained(self.model_name_or_path,
from_tf=bool(".ckpt" in self.model_name_or_path),
config=config,
cache_dir=self.cache_dir if self.cache_dir else None)
if self.local_rank == 0:
torch.distributed.barrier() # Make sure only the first process in distributed training will download model & vocab
model.to(self.device)
logger.info("Training/evaluation parameters %s", self)
train_dataset = load_and_cache_examples(self, tokenizer, pad_token_label_id, X,y, mode="train")
global_step, tr_loss = train(self, train_dataset,model,tokenizer,pad_token_label_id)
logger.info(" global_step = %s, average loss = %s", global_step, tr_loss)
# Saving best-practices: if you use defaults names for the model, you can reload it using from_pretrained()
if self.local_rank == -1 or torch.distributed.get_rank() == 0:
# Create output directory if needed
if not os.path.exists(self.output_dir) and self.local_rank in [-1, 0]:
os.makedirs(self.output_dir)
logger.info("Saving model checkpoint to %s", self.output_dir)
# Save a trained model, configuration and tokenizer using `save_pretrained()`.
# They can then be reloaded using `from_pretrained()`
model_to_save = model.module if hasattr(model,"module") else model # Take care of distributed/parallel training
model_to_save.save_pretrained(self.output_dir)
tokenizer.save_pretrained(self.output_dir)
# Good practice: save your training arguments together with the trained model
torch.save(self, os.path.join(self.output_dir, "training_args.bin"))
return self
def predict(self,X):
# args = torch.load(os.path.join(self.output_dir, "training_args.bin"))
# Load a trained model and vocabulary that you have fine-tuned
_, model_class, tokenizer_class = MODEL_CLASSES[self.model_type]
model = model_class.from_pretrained(self.output_dir)
tokenizer = tokenizer_class.from_pretrained(self.output_dir)
model.to(self.device)
pad_token_label_id = CrossEntropyLoss().ignore_index
# get dataset
test_dataset = load_and_cache_examples(self,tokenizer,pad_token_label_id,X,y=None,mode='test')
_, preds_list = evaluate(self,test_dataset,model,pad_token_label_id,mode='test')
return preds_list
def score(self, X, y, sample_weight=None):
y_pred = self.predict(X)
return classification_report(y,y_pred,digits=4)
def load_and_cache_examples(args, tokenizer,pad_token_label_id, X,y,mode):
if args.local_rank not in [-1, 0] and mode=='train':
torch.distributed.barrier() # Make sure only the first process in distributed training process the dataset, and the others will use the cache
# Load data features from cache or dataset file
cached_features_file = os.path.join(args.data_dir, "cached_{}_{}_{}".format(mode,
args.model_type,
str(args.max_seq_length)))
if os.path.exists(cached_features_file) and not args.overwrite_cache and mode=='train':
logger.info("Loading features from cached file %s", cached_features_file)
features = torch.load(cached_features_file)
else:
logger.info("Creating features from dataset file at %s", args.data_dir)
examples = read_examples_from_X_y(X,y, mode)
features = convert_examples_to_features(examples, args.label_map, args.max_seq_length, tokenizer,
cls_token_at_end=bool(args.model_type in ["xlnet"]),
# xlnet has a cls token at the end
cls_token=tokenizer.cls_token,
cls_token_segment_id=2 if args.model_type in ["xlnet"] else 0,
sep_token=tokenizer.sep_token,
sep_token_extra=bool(args.model_type in ["roberta"]),
# roberta uses an extra separator b/w pairs of sentences, cf. github.com/pytorch/fairseq/commit/1684e166e3da03f5b600dbb7855cb98ddfcd0805
pad_on_left=bool(args.model_type in ["xlnet"]),
# pad on the left for xlnet
pad_token=tokenizer.convert_tokens_to_ids([tokenizer.pad_token])[0],
pad_token_segment_id=4 if args.model_type in ["xlnet"] else 0,
pad_token_label_id=pad_token_label_id
)
if args.local_rank in [-1, 0] and mode == 'train':
logger.info("Saving features into cached file %s", cached_features_file)
torch.save(features, cached_features_file)
if args.local_rank == 0 and mode =='train':
torch.distributed.barrier() # Make sure only the first process in distributed training process the dataset, and the others will use the cache
# Convert to Tensors and build dataset
all_input_ids = torch.tensor([f.input_ids for f in features], dtype=torch.long)
all_input_mask = torch.tensor([f.input_mask for f in features], dtype=torch.long)
all_segment_ids = torch.tensor([f.segment_ids for f in features], dtype=torch.long)
all_label_ids = torch.tensor([f.label_ids for f in features], dtype=torch.long)
dataset = TensorDataset(all_input_ids, all_input_mask, all_segment_ids, all_label_ids)
return dataset
def train(args, train_dataset, model, tokenizer, pad_token_label_id):
""" Train the model """
if args.local_rank in [-1, 0]:
tb_writer = SummaryWriter()
args.train_batch_size = args.per_gpu_train_batch_size * max(1, args.n_gpu)
val_len = int(len(train_dataset)*args.val_fraction)
train_len = len(train_dataset) - val_len
train_ds, val_ds = random_split(train_dataset,[train_len,val_len])
train_sampler = RandomSampler(train_ds) if args.local_rank == -1 else DistributedSampler(train_ds)
train_dataloader = DataLoader(train_ds, sampler=train_sampler, batch_size=args.train_batch_size)
if args.max_steps > 0:
t_total = args.max_steps
args.num_train_epochs = args.max_steps // (len(train_dataloader) // args.gradient_accumulation_steps) + 1
else:
t_total = len(train_dataloader) // args.gradient_accumulation_steps * args.num_train_epochs
# Prepare optimizer and schedule (linear warmup and decay)
no_decay = ["bias", "LayerNorm.weight"]
optimizer_grouped_parameters = [
{"params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)],
"weight_decay": args.weight_decay},
{"params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)], "weight_decay": 0.0}
]
optimizer = AdamW(optimizer_grouped_parameters, lr=args.learning_rate, eps=args.adam_epsilon)
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=args.warmup_steps, num_training_steps=t_total)
# Check if saved optimizer or scheduler states exist
if os.path.isfile(os.path.join(args.model_name_or_path, "optimizer.pt")) and os.path.isfile(
os.path.join(args.model_name_or_path, "scheduler.pt")
):
# Load in optimizer and scheduler states
optimizer.load_state_dict(torch.load(os.path.join(args.model_name_or_path, "optimizer.pt")))
scheduler.load_state_dict(torch.load(os.path.join(args.model_name_or_path, "scheduler.pt")))
if args.fp16:
try:
from apex import amp
except ImportError:
raise ImportError("Please install apex from https://www.github.com/nvidia/apex to use fp16 training.")
model, optimizer = amp.initialize(model, optimizer, opt_level=args.fp16_opt_level)
# multi-gpu training (should be after apex fp16 initialization)
if args.n_gpu > 1:
model = torch.nn.DataParallel(model)
# Distributed training (should be after apex fp16 initialization)
if args.local_rank != -1:
model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.local_rank],
output_device=args.local_rank,
find_unused_parameters=True)
# Train!
logger.info("***** Running training *****")
logger.info(" Num examples = %d", len(train_ds))
logger.info(" Num Epochs = %d", args.num_train_epochs)
logger.info(" Instantaneous batch size per GPU = %d", args.per_gpu_train_batch_size)
logger.info(" Total train batch size (w. parallel, distributed & accumulation) = %d",
args.train_batch_size * args.gradient_accumulation_steps * (
torch.distributed.get_world_size() if args.local_rank != -1 else 1))
logger.info(" Gradient Accumulation steps = %d", args.gradient_accumulation_steps)
logger.info(" Total optimization steps = %d", t_total)
global_step = 0
epochs_trained = 0
steps_trained_in_current_epoch = 0
# Check if continuing training from a checkpoint
if os.path.exists(args.model_name_or_path):
# set global_step to gobal_step of last saved checkpoint from model path
global_step = int(args.model_name_or_path.split("-")[-1].split("/")[0])
epochs_trained = global_step // (len(train_dataloader) // args.gradient_accumulation_steps)
steps_trained_in_current_epoch = global_step % (len(train_dataloader) // args.gradient_accumulation_steps)
logger.info(" Continuing training from checkpoint, will skip to saved global_step")
logger.info(" Continuing training from epoch %d", epochs_trained)
logger.info(" Continuing training from global step %d", global_step)
logger.info(" Will skip the first %d steps in the first epoch", steps_trained_in_current_epoch)
tr_loss, logging_loss = 0.0, 0.0
model.zero_grad()
train_iterator = trange(int(args.num_train_epochs), desc="Epoch", disable=args.local_rank not in [-1, 0])
set_seed(seed=args.seed,n_gpu=args.n_gpu) # Added here for reproducibility (even between python 2 and 3)
for _ in train_iterator:
epoch_iterator = tqdm(train_dataloader, desc="Iteration", disable=args.local_rank not in [-1, 0])
for step, batch in enumerate(epoch_iterator):
# Skip past any already trained steps if resuming training
if steps_trained_in_current_epoch > 0:
steps_trained_in_current_epoch -= 1
continue
model.train()
batch = tuple(t.to(args.device) for t in batch)
inputs = {"input_ids": batch[0],
"attention_mask": batch[1],
"labels": batch[3]}
if args.model_type != "distilbert":
inputs["token_type_ids"] = batch[2] if args.model_type in ["bert", "xlnet"] else None # XLM and RoBERTa don"t use segment_ids
outputs = model(**inputs)
loss = outputs[0] # model outputs are always tuple in pytorch-transformers (see doc)
if args.n_gpu > 1:
loss = loss.mean() # mean() to average on multi-gpu parallel training
if args.gradient_accumulation_steps > 1:
loss = loss / args.gradient_accumulation_steps
if args.fp16:
with amp.scale_loss(loss, optimizer) as scaled_loss:
scaled_loss.backward()
else:
loss.backward()
tr_loss += loss.item()
if (step + 1) % args.gradient_accumulation_steps == 0:
if args.fp16:
torch.nn.utils.clip_grad_norm_(amp.master_params(optimizer), args.max_grad_norm)
else:
torch.nn.utils.clip_grad_norm_(model.parameters(), args.max_grad_norm)
optimizer.step()
scheduler.step() # Update learning rate schedule
model.zero_grad()
global_step += 1
if args.local_rank in [-1, 0] and args.logging_steps > 0 and global_step % args.logging_steps == 0:
# Log metrics
if args.local_rank == -1 and args.evaluate_during_training: # Only evaluate when single GPU otherwise metrics may not average well
results, _ = evaluate(args, val_ds, model,pad_token_label_id,prefix=global_step)
for key, value in results.items():
tb_writer.add_scalar("eval_{}".format(key), value, global_step)
tb_writer.add_scalar("lr", scheduler.get_lr()[0], global_step)
tb_writer.add_scalar("loss", (tr_loss - logging_loss) / args.logging_steps, global_step)
logging_loss = tr_loss
if args.local_rank in [-1, 0] and args.save_steps > 0 and global_step % args.save_steps == 0:
# Save model checkpoint
output_dir = os.path.join(args.output_dir, "checkpoint-{}".format(global_step))
if not os.path.exists(output_dir):
os.makedirs(output_dir)
model_to_save = model.module if hasattr(model, "module") else model # Take care of distributed/parallel training
model_to_save.save_pretrained(output_dir)
tokenizer.save_pretrained(output_dir)
torch.save(args, os.path.join(output_dir, "training_args.bin"))
logger.info("Saving model checkpoint to %s", output_dir)
torch.save(optimizer.state_dict(), os.path.join(output_dir, "optimizer.pt"))
torch.save(scheduler.state_dict(), os.path.join(output_dir, "scheduler.pt"))
logger.info("Saving optimizer and scheduler states to %s", output_dir)
if args.max_steps > 0 and global_step > args.max_steps:
epoch_iterator.close()
break
if args.max_steps > 0 and global_step > args.max_steps:
train_iterator.close()
break
if args.local_rank in [-1, 0]:
tb_writer.close()
return global_step, tr_loss / global_step
def evaluate(args, eval_dataset, model, pad_token_label_id, mode='dev',prefix=0):
# eval_dataset = load_and_cache_examples(args, tokenizer, labels, pad_token_label_id, mode=mode)
args.eval_batch_size = args.per_gpu_eval_batch_size * max(1, args.n_gpu)
# Note that DistributedSampler samples randomly
eval_sampler = SequentialSampler(eval_dataset) if args.local_rank == -1 else DistributedSampler(eval_dataset)
eval_dataloader = DataLoader(eval_dataset, sampler=eval_sampler, batch_size=args.eval_batch_size)
# multi-gpu evaluate
if args.n_gpu > 1 and mode == 'test':
model = torch.nn.DataParallel(model)
# Eval!
if mode == 'dev':
logger.info("***** Running evaluation %s *****", prefix)
else:
logger.info("***** Running predict *****")
logger.info(" Num examples = %d", len(eval_dataset))
logger.info(" Batch size = %d", args.eval_batch_size)
eval_loss = 0.0
nb_eval_steps = 0
preds = None
out_label_ids = None
model.eval()
for batch in tqdm(eval_dataloader, desc="Evaluating" if mode=='dev' else "Predicting"):
batch = tuple(t.to(args.device) for t in batch)
with torch.no_grad():
inputs = {"input_ids": batch[0],"attention_mask": batch[1],"labels": batch[3]}
if args.model_type != "distilbert":
inputs["token_type_ids"] = batch[2] if args.model_type in ["bert", "xlnet"] else None # XLM and RoBERTa don"t use segment_ids
outputs = model(**inputs)
tmp_eval_loss, logits = outputs[:2]
if args.n_gpu > 1:
tmp_eval_loss = tmp_eval_loss.mean() # mean() to average on multi-gpu parallel evaluating
eval_loss += tmp_eval_loss.item()
nb_eval_steps += 1
if preds is None:
preds = logits.detach().cpu().numpy()
out_label_ids = inputs["labels"].detach().cpu().numpy()
else:
preds = np.append(preds, logits.detach().cpu().numpy(), axis=0)
out_label_ids = np.append(out_label_ids, inputs["labels"].detach().cpu().numpy(), axis=0)
eval_loss = eval_loss / nb_eval_steps
preds = np.argmax(preds, axis=2)
# label_map = {i: label for i, label in enumerate(labels)}
out_label_list = [[] for _ in range(out_label_ids.shape[0])]
preds_list = [[] for _ in range(out_label_ids.shape[0])]
for i in range(out_label_ids.shape[0]):
for j in range(out_label_ids.shape[1]):
if out_label_ids[i, j] != pad_token_label_id:
out_label_list[i].append(args.id2label[out_label_ids[i][j]])
preds_list[i].append(args.id2label[preds[i][j]])
results = {
"loss": eval_loss,
"precision": precision_score(out_label_list, preds_list),
"recall": recall_score(out_label_list, preds_list),
"f1": f1_score(out_label_list, preds_list)
}
if mode == 'dev':
output_eval_file = os.path.join(args.output_dir, "eval_results.txt")
with open(output_eval_file, "a") as writer:
logger.info("***** Eval results %d *****",prefix)
writer.write("***** Eval results {} *****".format(prefix))
for key in sorted(results.keys()):
msg = "{} = {}".format(key, str(results[key]))
logger.info(msg)
writer.write(msg)
writer.write('\n')
writer.write('\n')
return results, preds_list
```
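For context, a minimal usage sketch of the estimator defined above. The import path assumes the script is saved as `token_classification.py`, and the label set, data format, and hyperparameters are illustrative placeholders (the exact `X`/`y` format is handled by `read_examples_from_X_y`, which is not shown here):

```python
# Hypothetical usage of BERTologyNERClassifer; all values below are placeholders.
from token_classification import BERTologyNERClassifer

labels = ["O", "B-PER", "I-PER", "B-LOC", "I-LOC"]
X = [["John", "lives", "in", "Berlin"]]   # assumed: one token list per sentence
y = [["B-PER", "O", "O", "B-LOC"]]        # assumed: one aligned tag list per sentence

ner = BERTologyNERClassifer(
    labels=labels,
    model_type="bert",
    model_name_or_path="bert-base-chinese",
    num_train_epochs=1,
    overwrite_output_dir=True,
)
ner.fit(X, y)
print(ner.predict(X))
print(ner.score(X, y))
```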
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2402/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2402/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2401 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2401/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2401/comments | https://api.github.com/repos/huggingface/transformers/issues/2401/events | https://github.com/huggingface/transformers/issues/2401 | 545,286,778 | MDU6SXNzdWU1NDUyODY3Nzg= | 2,401 | Batch size affecting output. | {
"login": "eriher",
"id": 6551733,
"node_id": "MDQ6VXNlcjY1NTE3MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6551733?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eriher",
"html_url": "https://github.com/eriher",
"followers_url": "https://api.github.com/users/eriher/followers",
"following_url": "https://api.github.com/users/eriher/following{/other_user}",
"gists_url": "https://api.github.com/users/eriher/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eriher/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eriher/subscriptions",
"organizations_url": "https://api.github.com/users/eriher/orgs",
"repos_url": "https://api.github.com/users/eriher/repos",
"events_url": "https://api.github.com/users/eriher/events{/privacy}",
"received_events_url": "https://api.github.com/users/eriher/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"It is possible to get slightly different results. Could you share more details on which evaluation script are you running and for which model/configuration etc?",
"I'm getting having the same issue. But with XLM-R:\r\n\r\nI decided to write a simple script to demonstrate the difference between encoding individually and encoding with a batch:\r\n\r\n```\r\nimport torch\r\nfrom torchnlp.encoders.text import stack_and_pad_tensors\r\nfrom torchnlp.utils import lengths_to_mask\r\nfrom transformers import (BertModel, BertTokenizer, XLMRobertaModel,\r\n XLMRobertaTokenizer)\r\n\r\ntorch.set_printoptions(precision=6)\r\n\r\ndef batch_encoder(samples, tokenizer):\r\n batch = []\r\n for sequence in samples:\r\n batch.append(torch.tensor(tokenizer.encode(sequence)))\r\n return stack_and_pad_tensors(batch, tokenizer.pad_token_id)\r\n\r\nxlm = XLMRobertaModel.from_pretrained(\r\n 'xlm-roberta-base', output_hidden_states=True\r\n )\r\n\r\nbert = BertModel.from_pretrained(\r\n 'bert-base-multilingual-cased', output_hidden_states=True\r\n )\r\n\r\n\r\nxlm.eval()\r\nbert.eval()\r\nwith torch.no_grad():\r\n bert_tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased')\r\n xlm_tokenizer = XLMRobertaTokenizer.from_pretrained('xlm-roberta-base')\r\n\r\n samples = [\"hello world!\", \"This is a batch and the first sentence will be padded\"]\r\n\r\n bert_tokens, bert_lengths = batch_encoder(samples, bert_tokenizer)\r\n bert_attention_mask = lengths_to_mask(bert_lengths)\r\n\r\n xlm_tokens, xlm_lengths = batch_encoder(samples, bert_tokenizer)\r\n xlm_attention_mask = lengths_to_mask(xlm_lengths)\r\n\r\n # Forward\r\n bert_out = bert(input_ids=bert_tokens, attention_mask=bert_attention_mask)\r\n xlm_out = xlm(input_ids=xlm_tokens, attention_mask=xlm_attention_mask)\r\n bert_last_hidden_states, bert_pooler_output, bert_all_layers = bert_out\r\n xlm_last_hidden_states, xlm_pooler_output, xlm_all_layers = xlm_out\r\n\r\n # Testing by comparing pooler_out\r\n bert_first_sample_tokens = torch.tensor(bert_tokenizer.encode(samples[0])).unsqueeze(0)\r\n xlm_first_sample_tokens = torch.tensor(xlm_tokenizer.encode(samples[0])).unsqueeze(0)\r\n bert_out = bert(input_ids=bert_first_sample_tokens)\r\n xlm_out = xlm(input_ids=xlm_first_sample_tokens)\r\n _, bert_pooler_output_1 , _ = bert_out\r\n _, xlm_pooler_output_1 , _ = xlm_out\r\n\r\n print (bert_pooler_output_1[0][:5])\r\n print (bert_pooler_output[0][:5])\r\n print ()\r\n #assert torch.equal(bert_pooler_output_1[0], bert_pooler_output[0])\r\n\r\n print (xlm_pooler_output_1[0][:5])\r\n print (xlm_pooler_output[0][:5])\r\n\r\n #assert torch.equal(xlm_pooler_output_1[0], xlm_pooler_output[0])```\r\n```\r\n\r\nScript Output:\r\n```\r\ntensor([ 0.264619, 0.191050, 0.120784, -0.024288, -0.186887])\r\ntensor([ 0.264619, 0.191049, 0.120784, -0.024288, -0.186887])\r\n\r\ntensor([-0.114997, -0.025624, -0.171540, 0.725383, 0.318024])\r\ntensor([-0.042580, 0.237069, 0.136827, 0.484221, 0.019779])\r\n```\r\n\r\nFor BERT the results don't change that much... But for XLM-R the results are shockingly different!\r\n\r\nAm I missing something?\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"unstale",
"I think I'm getting a similar issue. I'm using DistilBERT in this case, but depending on the batch size, I see different outputs. The differences are slight, but confusing nonetheless. It seems like the difference happens once the batch size goes beyond 3. All batch sizes beyond 3 are identical, but <=3 and >3 are diffierent. My example:\r\n\r\n```import torch\r\nfrom transformers import DistilBertModel, DistilBertTokenizer\r\nMODEL_NAME = 'distilbert-base-uncased'\r\ndistil_model = DistilBertModel.from_pretrained(MODEL_NAME)\r\ndistil_tokenizer = DistilBertTokenizer.from_pretrained(MODEL_NAME)\r\n\r\ndistil_model.eval()\r\ntorch.set_printoptions(precision=6)\r\nsamples = [\"hello world!\", \r\n \"goodbye world!\",\r\n \"hello hello!\",\r\n \"And so on and so on.\",\r\n \"And so on and so forth.\"]\r\ncond_output = {}\r\nfor cond in [2, 3, 5]:\r\n tokens = distil_tokenizer.batch_encode_plus(\r\n samples[:cond],\r\n pad_to_max_length=True, \r\n return_tensors=\"pt\")\r\n tokens.to(device)\r\n outputs = distil_model(**tokens)\r\n \\# just taking the first token of the first sample\r\n cond_output[cond] = outputs[0][:,0][0][:10].cpu().detach().numpy()\r\nprint(cond_output)\r\n```\r\n\r\nOutputs\r\n```\r\n{2: array([-0.18292062, -0.12333887, 0.1573697 , -0.1744302 , -0.25663155,\r\n -0.20508605, 0.31887087, 0.45650607, -0.21000467, -0.14479966],\r\n dtype=float32), 3: array([-0.18292062, -0.12333887, 0.1573697 , -0.1744302 , -0.25663155,\r\n -0.20508605, 0.31887087, 0.45650607, -0.21000467, -0.14479966],\r\n dtype=float32), 5: array([-0.1829206 , -0.12333884, 0.15736982, -0.1744302 , -0.25663146,\r\n -0.20508616, 0.318871 , 0.45650616, -0.21000458, -0.14479981],\r\n dtype=float32)}\r\n```\r\n\r\nAnyone have thoughts here? This causes some confusion when I run an individual sample through the model, as it's not the same as if I run it with 3 other samples.\r\n\r\n",
"> I'm getting having the same issue. But with XLM-R:\r\n> \r\n> I decided to write a simple script to demonstrate the difference between encoding individually and encoding with a batch:\r\n> \r\n> ```\r\n> import torch\r\n> from torchnlp.encoders.text import stack_and_pad_tensors\r\n> from torchnlp.utils import lengths_to_mask\r\n> from transformers import (BertModel, BertTokenizer, XLMRobertaModel,\r\n> XLMRobertaTokenizer)\r\n> \r\n> torch.set_printoptions(precision=6)\r\n> \r\n> def batch_encoder(samples, tokenizer):\r\n> batch = []\r\n> for sequence in samples:\r\n> batch.append(torch.tensor(tokenizer.encode(sequence)))\r\n> return stack_and_pad_tensors(batch, tokenizer.pad_token_id)\r\n> \r\n> xlm = XLMRobertaModel.from_pretrained(\r\n> 'xlm-roberta-base', output_hidden_states=True\r\n> )\r\n> \r\n> bert = BertModel.from_pretrained(\r\n> 'bert-base-multilingual-cased', output_hidden_states=True\r\n> )\r\n> \r\n> \r\n> xlm.eval()\r\n> bert.eval()\r\n> with torch.no_grad():\r\n> bert_tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased')\r\n> xlm_tokenizer = XLMRobertaTokenizer.from_pretrained('xlm-roberta-base')\r\n> \r\n> samples = [\"hello world!\", \"This is a batch and the first sentence will be padded\"]\r\n> \r\n> bert_tokens, bert_lengths = batch_encoder(samples, bert_tokenizer)\r\n> bert_attention_mask = lengths_to_mask(bert_lengths)\r\n> \r\n> xlm_tokens, xlm_lengths = batch_encoder(samples, bert_tokenizer)\r\n> xlm_attention_mask = lengths_to_mask(xlm_lengths)\r\n> \r\n> # Forward\r\n> bert_out = bert(input_ids=bert_tokens, attention_mask=bert_attention_mask)\r\n> xlm_out = xlm(input_ids=xlm_tokens, attention_mask=xlm_attention_mask)\r\n> bert_last_hidden_states, bert_pooler_output, bert_all_layers = bert_out\r\n> xlm_last_hidden_states, xlm_pooler_output, xlm_all_layers = xlm_out\r\n> \r\n> # Testing by comparing pooler_out\r\n> bert_first_sample_tokens = torch.tensor(bert_tokenizer.encode(samples[0])).unsqueeze(0)\r\n> xlm_first_sample_tokens = torch.tensor(xlm_tokenizer.encode(samples[0])).unsqueeze(0)\r\n> bert_out = bert(input_ids=bert_first_sample_tokens)\r\n> xlm_out = xlm(input_ids=xlm_first_sample_tokens)\r\n> _, bert_pooler_output_1 , _ = bert_out\r\n> _, xlm_pooler_output_1 , _ = xlm_out\r\n> \r\n> print (bert_pooler_output_1[0][:5])\r\n> print (bert_pooler_output[0][:5])\r\n> print ()\r\n> #assert torch.equal(bert_pooler_output_1[0], bert_pooler_output[0])\r\n> \r\n> print (xlm_pooler_output_1[0][:5])\r\n> print (xlm_pooler_output[0][:5])\r\n> \r\n> #assert torch.equal(xlm_pooler_output_1[0], xlm_pooler_output[0])```\r\n> ```\r\n> \r\n> Script Output:\r\n> \r\n> ```\r\n> tensor([ 0.264619, 0.191050, 0.120784, -0.024288, -0.186887])\r\n> tensor([ 0.264619, 0.191049, 0.120784, -0.024288, -0.186887])\r\n> \r\n> tensor([-0.114997, -0.025624, -0.171540, 0.725383, 0.318024])\r\n> tensor([-0.042580, 0.237069, 0.136827, 0.484221, 0.019779])\r\n> ```\r\n> \r\n> For BERT the results don't change that much... But for XLM-R the results are shockingly different!\r\n> \r\n> Am I missing something?\r\n\r\nAlso experienced same issue using BertForPreTraining. This doesn't make sense to me --- there's no component in Bert which depends on the batch size. I mean things like BatchNorm in training mode output different results with changed batch sizes. But no such component is in Bert AFAIK. Anything I missed? 
\r\nAnother thing I noticed is that if I use FP16, some instances yield quite different embeddings, but some instances have totally identical embeddings (across different batch sizes). If I use FP32, all instances have only slightly different embeddings (but none of them are identical).",
"I'm also facing with this issue. BERT returns different embeddings if I change the batch size. This happens only in the train() mode. Did any one figure out the reason? ",
"same problem over here, any thoughts about it ?\r\n",
"I'm having the same issue with BERT. Slightly differnt outputs, while only changing the batch size. It's driving me crazy, cause I don't understand where's the mistake",
"Not working on BERT, but I see this phenomenon also on a transformer I am working on.\r\nAny news? ",
"Deleted, there is bug 😂",
"Having the same issue with T5 model.",
"I'm seeing similar issues on a fine-tuned distilbert-base-uncased model, sometimes the norm of the difference of tensors can go up to 0.2 which seems huge to me (for Semantic Search applications it means hundreds of items would move around in the ranking depending on the size of the batch used for computing the embeddings).\r\nIs this issue closed ?\r\nPS: I tried using float64 precision but it makes no difference.",
"Having the same issue. Any update?",
"Met same issue. \r\n\r\nAt file transformers/models/roberta/modeling_roberta.py under function RobertaEncoder,\r\n\r\nIf I call \r\n`layer_outputs = layer_module(`\r\n`hidden_states[:2],`\r\n`attention_mask[:2],`\r\n`layer_head_mask,`\r\n`encoder_hidden_states,`\r\n`encoder_attention_mask,`\r\n`past_key_value,`\r\n`output_attentions,)`\r\nand print \r\n`hidden_states = layer_outputs[0]`\r\n`print(hidden_states[0,0,:10])`\r\n\r\nThe results are different from the below version:\r\n`layer_outputs = layer_module(`\r\n`hidden_states,`\r\n`attention_mask,`\r\n`layer_head_mask,`\r\n`encoder_hidden_states,`\r\n`encoder_attention_mask,`\r\n`past_key_value,`\r\n`output_attentions,)`\r\n\r\nI wonder if this is a bug in the huggingface? The only difference between the two versions for me is I change the input batch size. ",
"having the same issue with bart model",
"Hi! @osanseviero this is the bug I mentioned to you at Khipu. I can reproduce the behavior using @bpben's code with transformers 4.27.1 and torch 2.0.0 on a RTX 3090 GPU. At least for me, it results in consistent generations for models such as Flan-T5 XL, albeit I haven't been able to get it to happen with a minimal enough example. Nevertheless, the issue made by @infinitylogesh mentioning this one shows that more people are struggling with it.\r\n\r\nLet me know if I should open a new issue for this.",
"The issue still exists, any solutions @gsarti ?",
"I don't fully understand it yet, but it is not a huggingface issue. It seems like the matrix multiplication of PyTorch (used inside the linear layers) already returns different results for batches in combination with transpose on a Ryzen 5 2500U and on Colab:\r\n\r\n```python\r\nimport torch\r\nx = torch.randn(3,4)\r\ny = torch.randn(5,4).t()\r\ntorch.set_printoptions(precision=10)\r\n\r\n# batch size 1\r\nprint(x[0].matmul(y))\r\n# batch size 4 but only returning the first row\r\nprint(x.matmul(y)[0])\r\n# element-wise comparison batch-size 1 with first row of the result\r\nprint(x[0].matmul(y) == x.matmul(y)[0])\r\n```\r\nOutput:\r\n```\r\ntensor([ 1.4397521019, -1.0296567678, -0.9089178443, 0.3109838367,\r\n 0.2965016961])\r\ntensor([ 1.4397521019, -1.0296568871, -0.9089177847, 0.3109837770,\r\n 0.2965016961])\r\ntensor([ True, False, False, False, True])\r\n```\r\nAll comparisons with batch sizes >1 return identical results on a Ryzen 5 2500U but not on Colab:\r\n```python\r\n# comparing batch size 2 with the first two rows of the result\r\nprint(x[0:2].matmul(y) == x.matmul(y)[0:2])\r\n```\r\n Output:\r\n```\r\ntensor([[True, True, True, True, True],\r\n [True, True, True, True, True]])\r\n```\r\nMaybe I have made a mistake because I only get different results when I also use transpose on the Ryzen 5 2500U (results still differ on Colab):\r\n```python\r\nimport torch\r\nx = torch.randn(3,4)\r\ny = torch.randn(4,5)\r\ntorch.set_printoptions(precision=10)\r\n\r\n# batch size 1\r\nprint(x[0].matmul(y))\r\n# batch size 4 but only returning the first row\r\nprint(x.matmul(y)[0])\r\n# element-wise comparison batch-size 1 with first row of the result\r\nprint(x[0].matmul(y) == x.matmul(y)[0])\r\n```\r\nOutput:\r\n```\r\ntensor([-1.9365643263, -1.9082145691, 4.3417339325, -0.4087761641,\r\n 1.2496384382])\r\ntensor([-1.9365643263, -1.9082145691, 4.3417339325, -0.4087761641,\r\n 1.2496384382])\r\ntensor([True, True, True, True, True])\r\n```\r\nCan someone please check my logic? I don't understand how the transpose can have such an effect. I am afraid that I have made a mistake. \r\n\r\nOtherwise, it looks like the matrix multiplication implementation of the hardware is the root cause for the differences we get. This [paper](https://openreview.net/forum?id=9MDjKb9lGi), even if it didn't pass the review, also seems to point in that direction. It investigated this issue for cuBLAS (cuda matrix multiplication). ",
"I believe I may also be expericing this issue. Changing the contents of a batch, even if the size remains static, will change the resulting embedding for the same text.",
"Also got a difference when used encode , but no difference if use _apply_ method on series.\r\n\r\nimport pandas as pd\r\nsentence_transformer_path=\"distiluse-base-multilingual-cased-v1\"\r\nfrom sentence_transformers import SentenceTransformer\r\nencoder=SentenceTransformer(sentence_transformer_path).encode\r\n\r\ns=[str(i**2) for i in range(10)]\r\ndf=pd.DataFrame()\r\ndf[\"num\"]=s\r\nrow=1\r\n\r\nembed1=encoder(df[\"num\"])\r\nembed1=encoder(df[\"num\"])\r\ndifference=(encoder(df.loc[row,\"num\"])-embed1[row,:])\r\nprint('Method 1 difference: ',(sum(difference**2))**0.5)\r\n\r\nembed2=(df[\"num\"]).apply(encoder)\r\ndifference=(encoder(df.loc[row,\"num\"])-embed2[row])\r\nprint('Method 2 difference: ',(sum(difference**2))**0.5)\r\n\r\n\r\n\r\nMethod 1 difference: 3.997695879885389e-07\r\nMethod 2 difference: 0.0",
"> Having the same issue with T5 model.\r\n\r\nI have the same issue, are there any updates?"
] | 1,578 | 1,707 | 1,584 | NONE | null | ## ❓ Questions & Help
When running evaluation, why am I getting slightly different outputs with a batch size of 1 compared to a batch size greater than 1?
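For context, a sketch of the kind of comparison being made (the model name and tolerance are illustrative placeholders; the padded positions are masked, so only the first sentence's real tokens are compared):

```python
# Sketch: compare the output of one sentence computed alone vs. inside a padded batch.
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

short = tokenizer.encode("hello world!")
long = tokenizer.encode("a much longer sentence that forces the first one to be padded")
pad = tokenizer.pad_token_id
batch_ids = torch.tensor([short + [pad] * (len(long) - len(short)), long])
batch_mask = (batch_ids != pad).long()

with torch.no_grad():
    batched = model(batch_ids, attention_mask=batch_mask)[0][0, : len(short)]
    single = model(torch.tensor([short]))[0][0]

print(torch.allclose(batched, single, atol=1e-5))  # equal only up to floating-point tolerance
print((batched - single).abs().max())
```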
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2401/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2401/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2400 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2400/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2400/comments | https://api.github.com/repos/huggingface/transformers/issues/2400/events | https://github.com/huggingface/transformers/pull/2400 | 545,284,405 | MDExOlB1bGxSZXF1ZXN0MzU5MjEzNDU1 | 2,400 | fix #2399 an ImportError in official example | {
"login": "karajan1001",
"id": 6745454,
"node_id": "MDQ6VXNlcjY3NDU0NTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/6745454?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/karajan1001",
"html_url": "https://github.com/karajan1001",
"followers_url": "https://api.github.com/users/karajan1001/followers",
"following_url": "https://api.github.com/users/karajan1001/following{/other_user}",
"gists_url": "https://api.github.com/users/karajan1001/gists{/gist_id}",
"starred_url": "https://api.github.com/users/karajan1001/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/karajan1001/subscriptions",
"organizations_url": "https://api.github.com/users/karajan1001/orgs",
"repos_url": "https://api.github.com/users/karajan1001/repos",
"events_url": "https://api.github.com/users/karajan1001/events{/privacy}",
"received_events_url": "https://api.github.com/users/karajan1001/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2400?src=pr&el=h1) Report\n> Merging [#2400](https://codecov.io/gh/huggingface/transformers/pull/2400?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/78528742f169fb9481865aa25726ceca5499e036?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2400?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2400 +/- ##\n=======================================\n Coverage 73.24% 73.24% \n=======================================\n Files 87 87 \n Lines 14989 14989 \n=======================================\n Hits 10979 10979 \n Misses 4010 4010\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2400?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2400?src=pr&el=footer). Last update [7852874...71292b3](https://codecov.io/gh/huggingface/transformers/pull/2400?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,578 | 1,578 | 1,578 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2400/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2400/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2400",
"html_url": "https://github.com/huggingface/transformers/pull/2400",
"diff_url": "https://github.com/huggingface/transformers/pull/2400.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2400.patch",
"merged_at": 1578246621000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/2399 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2399/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2399/comments | https://api.github.com/repos/huggingface/transformers/issues/2399/events | https://github.com/huggingface/transformers/issues/2399 | 545,280,617 | MDU6SXNzdWU1NDUyODA2MTc= | 2,399 | import Error from official example caused by fastprogress | {
"login": "karajan1001",
"id": 6745454,
"node_id": "MDQ6VXNlcjY3NDU0NTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/6745454?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/karajan1001",
"html_url": "https://github.com/karajan1001",
"followers_url": "https://api.github.com/users/karajan1001/followers",
"following_url": "https://api.github.com/users/karajan1001/following{/other_user}",
"gists_url": "https://api.github.com/users/karajan1001/gists{/gist_id}",
"starred_url": "https://api.github.com/users/karajan1001/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/karajan1001/subscriptions",
"organizations_url": "https://api.github.com/users/karajan1001/orgs",
"repos_url": "https://api.github.com/users/karajan1001/repos",
"events_url": "https://api.github.com/users/karajan1001/events{/privacy}",
"received_events_url": "https://api.github.com/users/karajan1001/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for reporting. (and hello @sgugger :)\r\n\r\nI'll merge this to fix the immediate issue, but maybe @jplu can chime in: maybe we don't need the fastprogress dependency here?",
"Closed by #2400 ",
"Oh, I forgot to update the `__init__` with the new version. Will add back the functions there to make compatibility easier. Thanks for the super quick fix!",
"Hello!! Thanks @julien-c for pinging me :)\r\n\r\nIndeed my code was not compatible with the last version of fastprogress, but I thought to have specified the version in the `requirements.txt` file but apparently the way of installing the transformers framework has changed recently.\r\n\r\n@sgugger good job, I like your fix :)\r\n\r\n@julien-c fastprogress if (in my opinion) the most convenient progress bar to use for model training, but I can change if it becomes a problem, as you wish.\r\n",
"Alright let's use `fastprogress` then! We can clean up the conditional import down the line.",
"I change the:\r\n`from fastprogress import master_bar, progress_bar` \r\nto \r\n`from fastprogress.fastprogress import master_bar, progress_bar`\r\nin the ~/fastai/imports/core.py file and it worked"
] | 1,578 | 1,581 | 1,578 | CONTRIBUTOR | null | ## 🐛 Bug
Model I am using (Bert, XLNet....): BERT
Language I am using the model on (English, Chinese....): ALL
The problem arises when using:
* [x] the official example scripts: (give details)
* [ ] my own modified scripts: (give details)
The task I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details)
## To Reproduce
Steps to reproduce the behavior:
Version 0.2.1 of fastprogress, released a couple of days ago, causes an ImportError in run_tf_ner.py in the official examples:
```
Traceback (most recent call last):
  File "run_tf_ner.py", line 12, in <module>
    from fastprogress import master_bar, progress_bar
ImportError: cannot import name 'master_bar' from 'fastprogress' (/usr/local/lib/python3.7/dist-packages/fastprogress/__init__.py)
```
Users need to either downgrade (`pip3 install fastprogress==0.1.22`) or change the import:

```python
from fastprogress.fastprogress import master_bar, progress_bar
```
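A version-agnostic workaround is also possible (a small sketch that simply tries the old import path and falls back to the new one):

```python
# Works with both fastprogress < 0.2 and >= 0.2
try:
    from fastprogress import master_bar, progress_bar
except ImportError:
    from fastprogress.fastprogress import master_bar, progress_bar
```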
## Expected behavior
## Environment
* OS:
* Python version:
* PyTorch version:
* PyTorch Transformers version (or branch):
* Using GPU ?
* Distributed or parallel setup?
* Any other relevant information:
## Additional context
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2399/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2399/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2398 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2398/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2398/comments | https://api.github.com/repos/huggingface/transformers/issues/2398/events | https://github.com/huggingface/transformers/issues/2398 | 545,279,816 | MDU6SXNzdWU1NDUyNzk4MTY= | 2,398 | Distilbert predicting mask | {
"login": "VanOvermeire",
"id": 10529492,
"node_id": "MDQ6VXNlcjEwNTI5NDky",
"avatar_url": "https://avatars.githubusercontent.com/u/10529492?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VanOvermeire",
"html_url": "https://github.com/VanOvermeire",
"followers_url": "https://api.github.com/users/VanOvermeire/followers",
"following_url": "https://api.github.com/users/VanOvermeire/following{/other_user}",
"gists_url": "https://api.github.com/users/VanOvermeire/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VanOvermeire/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VanOvermeire/subscriptions",
"organizations_url": "https://api.github.com/users/VanOvermeire/orgs",
"repos_url": "https://api.github.com/users/VanOvermeire/repos",
"events_url": "https://api.github.com/users/VanOvermeire/events{/privacy}",
"received_events_url": "https://api.github.com/users/VanOvermeire/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"In full `bert` case you are using `BertForMaskedLM` but for distill bert you are using `DistilBertModel` which is not for masked language modelling. Try using `DistilBertForMaskedLM`. Check it, it works:\r\n\r\nhttps://colab.research.google.com/drive/1GYt9H9QRUa5clFfAke6KPYl0mi4H1F3H",
"Well, in hindsight that was obvious. :) Thanks!"
] | 1,578 | 1,578 | 1,578 | NONE | null | Hi,
This is probably me doing something wrong, but I can't get distilbert to give me a sensible prediction when I mask part of a sentence.
This setup for BERT (based on the examples):
```
import logging
import torch
from transformers import BertTokenizer, BertForMaskedLM

logging.basicConfig(level=logging.INFO)
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
text = "Hello how are you doing?"
tokenized_text = tokenizer.tokenize(text)
masked_index = 2
tokenized_text[masked_index] = '[MASK]'
indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)
segments_ids = [0, 0, 0, 0, 0, 0]
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])
model = BertForMaskedLM.from_pretrained('bert-base-uncased')
model.eval()
with torch.no_grad():
outputs = model(tokens_tensor, token_type_ids=segments_tensors)
predictions = outputs[0]
predicted_index = torch.argmax(predictions[0, masked_index]).item()
predicted_token = tokenizer.convert_ids_to_tokens([predicted_index])[0]
print(predicted_token)
```
gives the correct answer _are_ for _How are you doing?_.
But when I try the same with distilbert:
```
from transformers import DistilBertTokenizer, DistilBertModel

tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
text = "Hello how are you doing?"
tokenized_text = tokenizer.tokenize(text)
masked_index = 2
tokenized_text[masked_index] = '[MASK]'
indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)
tokens_tensor = torch.tensor([indexed_tokens])
model = DistilBertModel.from_pretrained('distilbert-base-uncased')
model.eval()
with torch.no_grad():
# tried both with and without the segment tokens; when I pass them to the model, it throws an error
last_hidden_states = model(tokens_tensor)
outputs = last_hidden_states[0]
predicted_index = torch.argmax(outputs[0], dim=1)[masked_index].item()
predicted_token = tokenizer.convert_ids_to_tokens([predicted_index])[0]
print(predicted_token)
```
I practically always get some _unusedxxx_ token as a result. At first I thought this was because DistilBERT is a smaller model, but no matter what I try, I keep getting _unused_ tokens, so I am guessing it's something else.
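For comparison, a sketch of the same lookup with `DistilBertForMaskedLM`, which (like `BertForMaskedLM`) returns vocabulary logits rather than hidden states; the tokenization is the same as above:

```python
# Sketch: masked-token prediction with the LM-head variant of DistilBERT.
import torch
from transformers import DistilBertForMaskedLM, DistilBertTokenizer

tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
lm_model = DistilBertForMaskedLM.from_pretrained('distilbert-base-uncased')
lm_model.eval()

tokenized_text = tokenizer.tokenize("Hello how are you doing?")
masked_index = 2
tokenized_text[masked_index] = '[MASK]'
tokens_tensor = torch.tensor([tokenizer.convert_tokens_to_ids(tokenized_text)])

with torch.no_grad():
    logits = lm_model(tokens_tensor)[0]  # shape: (1, seq_len, vocab_size)

predicted_index = torch.argmax(logits[0, masked_index]).item()
print(tokenizer.convert_ids_to_tokens([predicted_index])[0])
```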
Thanks in advance!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2398/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2398/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2397 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2397/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2397/comments | https://api.github.com/repos/huggingface/transformers/issues/2397/events | https://github.com/huggingface/transformers/issues/2397 | 545,206,566 | MDU6SXNzdWU1NDUyMDY1NjY= | 2,397 | unable to use distilbert multilingual model | {
"login": "nikhar2008",
"id": 10780359,
"node_id": "MDQ6VXNlcjEwNzgwMzU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10780359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nikhar2008",
"html_url": "https://github.com/nikhar2008",
"followers_url": "https://api.github.com/users/nikhar2008/followers",
"following_url": "https://api.github.com/users/nikhar2008/following{/other_user}",
"gists_url": "https://api.github.com/users/nikhar2008/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nikhar2008/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nikhar2008/subscriptions",
"organizations_url": "https://api.github.com/users/nikhar2008/orgs",
"repos_url": "https://api.github.com/users/nikhar2008/repos",
"events_url": "https://api.github.com/users/nikhar2008/events{/privacy}",
"received_events_url": "https://api.github.com/users/nikhar2008/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi\r\nI have verified that it is working. Could you please share your environment details etc.\r\n\r\nhttps://colab.research.google.com/drive/1Bo0luU5q7bztalw5-trWsvl7G0J6zE10",
"It seems you're not actually running on transformers 2.3.0. Could you please tell me the output of this code in your environment?\r\n\r\n```py\r\nfrom transformers import AutoModel, AutoTokenizer, __version__\r\n\r\nprint(__version__)\r\nmodel = AutoModel.from_pretrained(\"distilbert-base-multilingual-cased\")\r\ntokenizer = AutoTokenizer.from_pretrained(\"distilbert-base-multilingual-cased\")\r\n```\r\n\r\nThank you.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,578 | 1,584 | 1,584 | NONE | null | ## ❓ Questions & Help
I'm trying to use the distilbert-base-multilingual-cased model but have been unable to do so.
I have checked that I am using transformers version 2.3.0. I have already tried the following:
1)
```python
tokenizer = AutoTokenizer.from_pretrained("https://s3.amazonaws.com/models.huggingface.co/bert/distilbert-base-multilingual-cased-config.json")
model = AutoModel.from_pretrained("https://s3.amazonaws.com/models.huggingface.co/bert/distilbert-base-multilingual-cased-pytorch_model.bin")
```
Gives the following error message:
```
OSError: Model name 'https://s3.amazonaws.com/models.huggingface.co/bert/distilbert-base-multilingual-cased-config.json' was not found in tokenizers model name list (distilbert-base-uncased, distilbert-base-uncased-distilled-squad). We assumed 'https://s3.amazonaws.com/models.huggingface.co/bert/distilbert-base-multilingual-cased-config.json' was a path or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.
```
2)
```python
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-multilingual-cased")
model = AutoModel.from_pretrained("distilbert-base-multilingual-cased")
```
Gives the following error message:
```
OSError: Model name 'distilbert-base-multilingual-cased' was not found in tokenizers model name list (distilbert-base-uncased, distilbert-base-uncased-distilled-squad). We assumed 'distilbert-base-multilingual-cased' was a path or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.
```
3) Same as (2) but with `DistilBertTokenizer` and `DistilBertModel`.
Can I please get some help in fixing this issue?
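For reference, a minimal check of the short-name route (a sketch; it assumes the installed version really is 2.3.0 or later, where this checkpoint is listed):

```python
# Sketch: verify the installed version and load the multilingual DistilBERT checkpoint.
from transformers import AutoModel, AutoTokenizer, __version__

print(__version__)  # should print 2.3.0 or later
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-multilingual-cased")
model = AutoModel.from_pretrained("distilbert-base-multilingual-cased")
```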
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2397/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2397/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2396 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2396/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2396/comments | https://api.github.com/repos/huggingface/transformers/issues/2396/events | https://github.com/huggingface/transformers/issues/2396 | 545,200,978 | MDU6SXNzdWU1NDUyMDA5Nzg= | 2,396 | Model2Model quickstart attention_mask dimensionality problem | {
"login": "ykl7",
"id": 4996184,
"node_id": "MDQ6VXNlcjQ5OTYxODQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/4996184?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ykl7",
"html_url": "https://github.com/ykl7",
"followers_url": "https://api.github.com/users/ykl7/followers",
"following_url": "https://api.github.com/users/ykl7/following{/other_user}",
"gists_url": "https://api.github.com/users/ykl7/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ykl7/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ykl7/subscriptions",
"organizations_url": "https://api.github.com/users/ykl7/orgs",
"repos_url": "https://api.github.com/users/ykl7/repos",
"events_url": "https://api.github.com/users/ykl7/events{/privacy}",
"received_events_url": "https://api.github.com/users/ykl7/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"same issue:\r\n\r\nLinux (ubuntu 18.04.3 LTS)\r\n\r\nPython 3.6.9\r\nTorch Version: 1.3.1\r\n\r\nno GPU - regular DELL box, \r\n\r\ntransformers installed following this part on installation guide (under python3 venv):\r\n...\r\ngit clone https://github.com/huggingface/transformers\r\ncd transformers\r\npip install .\r\n...\r\n\r\nTraceback (most recent call last):\r\n File \"/home/jimihendrix/projects/transformers/albert/quickstart4_model2model.py\", line 65, in <module>\r\n outputs = model(question_tensor, answer_tensor, decoder_lm_labels=labels_tensor)\r\n File \"/home/jimihendrix/projects/transformers/venv3/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 541, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/jimihendrix/projects/transformers/venv3/lib/python3.6/site-packages/transformers/modeling_encoder_decoder.py\", line 234, in forward\r\n decoder_outputs = self.decoder(decoder_input_ids, encoder_hidden_states, **kwargs_decoder)\r\n File \"/home/jimihendrix/projects/transformers/venv3/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 541, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/jimihendrix/projects/transformers/venv3/lib/python3.6/site-packages/transformers/modeling_bert.py\", line 986, in forward\r\n encoder_attention_mask=encoder_attention_mask,\r\n File \"/home/jimihendrix/projects/transformers/venv3/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 541, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/jimihendrix/projects/transformers/venv3/lib/python3.6/site-packages/transformers/modeling_bert.py\", line 808, in forward\r\n encoder_attention_mask=encoder_extended_attention_mask,\r\n File \"/home/jimihendrix/projects/transformers/venv3/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 541, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/jimihendrix/projects/transformers/venv3/lib/python3.6/site-packages/transformers/modeling_bert.py\", line 422, in forward\r\n hidden_states, attention_mask, head_mask[i], encoder_hidden_states, encoder_attention_mask\r\n File \"/home/jimihendrix/projects/transformers/venv3/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 541, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/jimihendrix/projects/transformers/venv3/lib/python3.6/site-packages/transformers/modeling_bert.py\", line 383, in forward\r\n self_attention_outputs = self.attention(hidden_states, attention_mask, head_mask)\r\n File \"/home/jimihendrix/projects/transformers/venv3/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 541, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/jimihendrix/projects/transformers/venv3/lib/python3.6/site-packages/transformers/modeling_bert.py\", line 329, in forward\r\n hidden_states, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask\r\n File \"/home/jimihendrix/projects/transformers/venv3/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 541, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/jimihendrix/projects/transformers/venv3/lib/python3.6/site-packages/transformers/modeling_bert.py\", line 253, in forward\r\n attention_scores = attention_scores + attention_mask\r\nRuntimeError: The size of tensor a (8) must match the size of tensor b (768) at non-singleton dimension 3\r\n",
"Indeed, I could reproduce this issue. Thanks for raising it!\r\n\r\nMy attempt at fixing it is [here](https://github.com/huggingface/transformers/pull/2452).",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,578 | 1,583 | 1,583 | NONE | null | ## 🐛 Bug
<!-- Important information -->
Model I am using (Bert, XLNet....):
BERT-base-uncased
Language I am using the model on (English, Chinese....):
The problem arises when using:
* [X] the official example scripts: (give details)
[model2model tutorial code](https://huggingface.co/transformers/quickstart.html#model2model-example)
* [ ] my own modified scripts: (give details)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details)
## To Reproduce
Steps to reproduce the behavior:
1. Copy the [model2model tutorial code](https://huggingface.co/transformers/quickstart.html#model2model-example) into a new file and run it.
<!-- If you have a code sample, error messages, stack traces, please provide it here as well. -->
```
Traceback (most recent call last):
File "huggingface_m2m_example.py", line 47, in <module>
outputs = model(question_tensor, answer_tensor, decoder_lm_labels=labels_tensor)
File "/lfs1/jwei/anaconda3/envs/low_mt/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/lfs1/jwei/anaconda3/envs/low_mt/lib/python3.7/site-packages/transformers-2.3.0-py3.7.egg/transformers/modeling_encoder_decoder.py", line 234, in forward
File "/lfs1/jwei/anaconda3/envs/low_mt/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/lfs1/jwei/anaconda3/envs/low_mt/lib/python3.7/site-packages/transformers-2.3.0-py3.7.egg/transformers/modeling_bert.py", line 997, in forward
File "/lfs1/jwei/anaconda3/envs/low_mt/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/lfs1/jwei/anaconda3/envs/low_mt/lib/python3.7/site-packages/transformers-2.3.0-py3.7.egg/transformers/modeling_bert.py", line 819, in forward
File "/lfs1/jwei/anaconda3/envs/low_mt/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/lfs1/jwei/anaconda3/envs/low_mt/lib/python3.7/site-packages/transformers-2.3.0-py3.7.egg/transformers/modeling_bert.py", line 433, in forward
File "/lfs1/jwei/anaconda3/envs/low_mt/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/lfs1/jwei/anaconda3/envs/low_mt/lib/python3.7/site-packages/transformers-2.3.0-py3.7.egg/transformers/modeling_bert.py", line 394, in forward
File "/lfs1/jwei/anaconda3/envs/low_mt/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/lfs1/jwei/anaconda3/envs/low_mt/lib/python3.7/site-packages/transformers-2.3.0-py3.7.egg/transformers/modeling_bert.py", line 334, in forward
File "/lfs1/jwei/anaconda3/envs/low_mt/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/lfs1/jwei/anaconda3/envs/low_mt/lib/python3.7/site-packages/transformers-2.3.0-py3.7.egg/transformers/modeling_bert.py", line 257, in forward
RuntimeError: The size of tensor a (8) must match the size of tensor b (768) at non-singleton dimension 3
```
I printed out the attention masks and the attention scores [right before](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L253) and got the following:
```
question_tensor.shape: torch.Size([1, 7])
answer_tensor.shape: torch.Size([1, 8])
BertSelfAttention is being called
attention_scores: torch.Size([1, 12, 7, 7])
attention_masks: torch.Size([1, 1, 1, 7])
BertSelfAttention is being called
attention_scores: torch.Size([1, 12, 7, 7])
attention_masks: torch.Size([1, 1, 1, 7])
BertSelfAttention is being called
attention_scores: torch.Size([1, 12, 7, 7])
attention_masks: torch.Size([1, 1, 1, 7])
BertSelfAttention is being called
attention_scores: torch.Size([1, 12, 7, 7])
attention_masks: torch.Size([1, 1, 1, 7])
BertSelfAttention is being called
attention_scores: torch.Size([1, 12, 7, 7])
attention_masks: torch.Size([1, 1, 1, 7])
BertSelfAttention is being called
attention_scores: torch.Size([1, 12, 7, 7])
attention_masks: torch.Size([1, 1, 1, 7])
BertSelfAttention is being called
attention_scores: torch.Size([1, 12, 7, 7])
attention_masks: torch.Size([1, 1, 1, 7])
BertSelfAttention is being called
attention_scores: torch.Size([1, 12, 7, 7])
attention_masks: torch.Size([1, 1, 1, 7])
BertSelfAttention is being called
attention_scores: torch.Size([1, 12, 7, 7])
attention_masks: torch.Size([1, 1, 1, 7])
BertSelfAttention is being called
attention_scores: torch.Size([1, 12, 7, 7])
attention_masks: torch.Size([1, 1, 1, 7])
BertSelfAttention is being called
attention_scores: torch.Size([1, 12, 7, 7])
attention_masks: torch.Size([1, 1, 1, 7])
BertSelfAttention is being called
attention_scores: torch.Size([1, 12, 7, 7])
attention_masks: torch.Size([1, 1, 1, 7])
I am a decoder
BertSelfAttention is being called
attention_scores: torch.Size([1, 12, 8, 8])
attention_masks: torch.Size([1, 1, 7, 768])
```
It looks like this is the first time that cross-attention is being called. The `is_decoder` flag is passed as `False` in the tutorial code; we changed it to `True` in the code ourselves, but the error is the same irrespective of that change.
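For reference, the incompatible shapes from the debug output above reproduce the reported error in isolation (a minimal sketch using just those shapes, not the library code itself):
```
import torch

attention_scores = torch.zeros(1, 12, 8, 8)   # shape printed for the cross-attention scores
attention_mask = torch.zeros(1, 1, 7, 768)    # shape printed for the encoder attention mask
attention_scores + attention_mask             # RuntimeError: size of tensor a (8) must match tensor b (768) at dimension 3
```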
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
The code runs as given in the tutorial.
## Environment
* OS: Linux
* Python version: 3.7.5
* PyTorch version: 1.3.1
* PyTorch Transformers version (or branch): Latest master branch
* Using GPU ? Yes
* Distributed or parallel setup ? No
* Any other relevant information:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2396/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2396/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2395 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2395/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2395/comments | https://api.github.com/repos/huggingface/transformers/issues/2395/events | https://github.com/huggingface/transformers/issues/2395 | 545,167,295 | MDU6SXNzdWU1NDUxNjcyOTU= | 2,395 | ALBERT pretrained models uses wrong type of GELU activation | {
"login": "Rexhaif",
"id": 5154447,
"node_id": "MDQ6VXNlcjUxNTQ0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5154447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rexhaif",
"html_url": "https://github.com/Rexhaif",
"followers_url": "https://api.github.com/users/Rexhaif/followers",
"following_url": "https://api.github.com/users/Rexhaif/following{/other_user}",
"gists_url": "https://api.github.com/users/Rexhaif/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rexhaif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rexhaif/subscriptions",
"organizations_url": "https://api.github.com/users/Rexhaif/orgs",
"repos_url": "https://api.github.com/users/Rexhaif/repos",
"events_url": "https://api.github.com/users/Rexhaif/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rexhaif/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"If you just change your config.json's `hidden_act` value locally you should still be able to load the pretrained weights and be able to convert the model to TFLite, right? ",
"Yes. Another option (my current workaround) is to explicitly specify hidden_act when creating model instance (via .from_pretrained(...)) during fine-tuning stage. ",
"It looks like you have fixed all ALBERT config files. Thanks!)"
] | 1,578 | 1,582 | 1,582 | CONTRIBUTOR | null | ## 🐛 Bug
<!-- Important information -->
Model I am using (Bert, XLNet....): ALBERT
Language I am using the model on (English, Chinese....): English
The problem arises when using:
* [x] my own modified scripts: I'm fine-tuning ALBERT for a multi-label classification problem and then converting the model into TF-Lite format
The task I am working on is:
* [x] my own task or dataset: multi-label text classification on SemEval 2018 Task 1: E-c
## To Reproduce
Steps to reproduce the behavior:
1. Open any link to pretrained configuration at [transformers/configuration_albert.py](https://github.com/huggingface/transformers/blob/master/src/transformers/configuration_albert.py). For example: [albert-base-v2-config.json](https://s3.amazonaws.com/models.huggingface.co/bert/albert-base-v2-config.json). But problem applies to all the pretrained configs.
2. Check the value of property "hidden_act" (it will be gelu)
3. Realize that `gelu` here stands for the BERT-like implementation (see code [here](https://github.com/huggingface/transformers/blob/1ae63e4e097fe26c900783dd5c1710cf562e222e/src/transformers/modeling_bert.py#L152)), while the original [code](https://github.com/google-research/ALBERT/blob/e350db671ae96b8345cd2c0ee1306713642b9810/modeling.py#L296) uses the OpenAI-GPT-like GELU (defined in transformers as "gelu_new"). A short sketch of the two variants follows.
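For context, the two activations differ roughly as follows (a minimal PyTorch sketch of the usual formulations; the exact code lives in the files linked above):
```
import math
import torch

def gelu(x):
    # BERT-style GELU: exact form via the error function (erf), which maps to tf.math.erf on export
    return x * 0.5 * (1.0 + torch.erf(x / math.sqrt(2.0)))

def gelu_new(x):
    # OpenAI-GPT-style GELU ("gelu_new"): tanh approximation used by the original ALBERT code
    return 0.5 * x * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * torch.pow(x, 3.0))))
```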
<!-- If you have a code sample, error messages, stack traces, please provide it here as well. -->
## Expected behavior
All the configuration files should contain "gelu_new" under the "hidden_act" key. Until they are fixed, the activation can be overridden at load time, as sketched below.
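A minimal sketch of that workaround (also mentioned in the comments), assuming extra keyword arguments passed to `from_pretrained` are forwarded to the configuration:
```
from transformers import AlbertModel

# Override the activation despite the "gelu" value shipped in the pretrained config.json
model = AlbertModel.from_pretrained("albert-base-v2", hidden_act="gelu_new")
print(model.config.hidden_act)  # expected: "gelu_new"
```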
## Environment
* OS: doesn't matter
* Python version: 3.7
* PyTorch version: 1.3.0
* PyTorch Transformers version (or branch): master
* Using GPU: doesn't matter
* Distributed or parallel setup: doesn't matter
* Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
While it possibly doesn't change model performance significantly, it makes converting the model into TF-Lite format a lot more tricky, because the BERT-like GELU implementation uses the tf.math.erf function, which is not in the TFLite builtins set. So I have to use TFLite select ops, which leads to a heavier TFLite interpreter on the Android side. It also makes it impossible to check the converted model's performance on the Python side, because the default TensorFlow Lite Python interpreter can't run select-ops models. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2395/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2395/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2394 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2394/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2394/comments | https://api.github.com/repos/huggingface/transformers/issues/2394/events | https://github.com/huggingface/transformers/issues/2394 | 545,059,580 | MDU6SXNzdWU1NDUwNTk1ODA= | 2,394 | Pretrained model installation issue | {
"login": "pranjalsharma26",
"id": 48346579,
"node_id": "MDQ6VXNlcjQ4MzQ2NTc5",
"avatar_url": "https://avatars.githubusercontent.com/u/48346579?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pranjalsharma26",
"html_url": "https://github.com/pranjalsharma26",
"followers_url": "https://api.github.com/users/pranjalsharma26/followers",
"following_url": "https://api.github.com/users/pranjalsharma26/following{/other_user}",
"gists_url": "https://api.github.com/users/pranjalsharma26/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pranjalsharma26/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pranjalsharma26/subscriptions",
"organizations_url": "https://api.github.com/users/pranjalsharma26/orgs",
"repos_url": "https://api.github.com/users/pranjalsharma26/repos",
"events_url": "https://api.github.com/users/pranjalsharma26/events{/privacy}",
"received_events_url": "https://api.github.com/users/pranjalsharma26/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"The file on S3 seems accessible right now. Did you try to reach it directly from your machine and from console to check you have no network issue?",
"No, my network is having no issues I verified it, and from console it is not accessible. The required pretrained model is to be installed from command only as per code. The same issue still exists.\r\n",
"Please, could you try calling python directly from console to check we have the same behavior?\r\n\r\n```python\r\nfrom transformers.modeling_auto import AutoModel\r\n\r\nAutoModel.from_pretrained(\"bert-base-uncased\")\r\n```\r\n\r\nif it doesn't fail and shows the model (as it happens in my console), it means you have access to the S3 and the issue is somewhere else.",
"In the same py file I need to write this lines ?\r\nYes, I'm calling python from console from initial phase.\r\n",
"no need of a py file, just in a python3 console, you can type those commands directly (as long as you have pip installed transformers)... if you haven't a recent version with `AutoModel` available, use `BertModel` instead.",
"Even after doing so as you told the same issue exist, from both AutoModel and BertModel.\r\n\r\n",
"let's look at what I have on my console:\r\n```python\r\n>>> BertModel.from_pretrained(\"bert-base-uncased\", force_download=True)\r\nDownloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 313/313 [00:00<00:00, 29.5kB/s]\r\nDownloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 440M/440M [00:23<00:00, 18.8MB/s]\r\nBertModel(\r\n (embeddings): BertEmbeddings(\r\n (word_embeddings): Embedding(30522, 768, padding_idx=0)\r\n (position_embeddings): Embedding(512, 768)\r\n (token_type_embeddings): Embedding(2, 768)\r\n (LayerNorm): LayerNorm(torch.Size([768]), eps=1e-12, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (encoder): BertEncoder(\r\n...\r\n```\r\n\r\nSo it finds the files and model from my network.\r\nSo that's why I wonder whether you haven't something in your network that prevents you from reaching S3.",
"On my console I didn't knew how it is not working I tried the same as you told to do so. I think I should try some another way to sort this issue. On your note you are correct but in my machine it doesn't works. Thanks buddy!",
"I saw that you were working on Windows, right? Are you sure you have no firewall/antivirus software blocking access or locking files or anything else? Just giving some ideas ;)\r\nAnyway, you're welcome!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,578 | 1,583 | 1,583 | NONE | null | I run the script for this repo "https://github.com/alexa/wqa_tanda" in which i need to run the run_glue.py file from Transformer Model, while running that script it gives an error-
Couldn't reach server at "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-config.json to download pretrained model configuration file" as shown -

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2394/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2394/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2393 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2393/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2393/comments | https://api.github.com/repos/huggingface/transformers/issues/2393/events | https://github.com/huggingface/transformers/issues/2393 | 545,058,125 | MDU6SXNzdWU1NDUwNTgxMjU= | 2,393 | run_squad_w_distillation update | {
"login": "simonepreite",
"id": 11095682,
"node_id": "MDQ6VXNlcjExMDk1Njgy",
"avatar_url": "https://avatars.githubusercontent.com/u/11095682?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/simonepreite",
"html_url": "https://github.com/simonepreite",
"followers_url": "https://api.github.com/users/simonepreite/followers",
"following_url": "https://api.github.com/users/simonepreite/following{/other_user}",
"gists_url": "https://api.github.com/users/simonepreite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/simonepreite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/simonepreite/subscriptions",
"organizations_url": "https://api.github.com/users/simonepreite/orgs",
"repos_url": "https://api.github.com/users/simonepreite/repos",
"events_url": "https://api.github.com/users/simonepreite/events{/privacy}",
"received_events_url": "https://api.github.com/users/simonepreite/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,578 | 1,584 | 1,584 | NONE | null | ## 🐛 Bug
<!-- Important information -->
Model I am using (Bert, XLNet....): DistilBERT and BERT
Language I am using the model on (English, Chinese....): English
The problem arises when using:
* [ ] the official example scripts: run_squad_w_distillation.py
The task I am working on is:
* [ ] an official GLUE/SQUaD task: SQuAD
## To Reproduce
Steps to reproduce the behavior:
1. Fine tuning on question answering from BERT to DistilBERT
<!-- If you have a code sample, error messages, stack traces, please provide it here as well. -->
## Expected behavior
The files utils_squad and utils_squad_evaluate are missing right now, so to make this script work I had to restore the files from a previous version of this repo. What I expect is to be able to use the script without such workarounds.
## Environment
* OS: Ubuntu 16.04
* Python version: 3.5.6
* PyTorch version: 1.3.1
* Using GPU: NVIDIA P100 (2x) or 1080Ti
## Additional context
<!-- Add any other context about the problem here. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2393/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2393/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2392 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2392/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2392/comments | https://api.github.com/repos/huggingface/transformers/issues/2392/events | https://github.com/huggingface/transformers/issues/2392 | 545,025,679 | MDU6SXNzdWU1NDUwMjU2Nzk= | 2,392 | Unable to download community models | {
"login": "cbowdon",
"id": 1069832,
"node_id": "MDQ6VXNlcjEwNjk4MzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1069832?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cbowdon",
"html_url": "https://github.com/cbowdon",
"followers_url": "https://api.github.com/users/cbowdon/followers",
"following_url": "https://api.github.com/users/cbowdon/following{/other_user}",
"gists_url": "https://api.github.com/users/cbowdon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cbowdon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cbowdon/subscriptions",
"organizations_url": "https://api.github.com/users/cbowdon/orgs",
"repos_url": "https://api.github.com/users/cbowdon/repos",
"events_url": "https://api.github.com/users/cbowdon/events{/privacy}",
"received_events_url": "https://api.github.com/users/cbowdon/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
}
] | [
"I confirm what you see... in current master code, `bert-large-cased-finetuned-conll03-english` has no mapping in tokenizers or models so it can't find it in the same way as `bert-base-uncased` for example.\r\n\r\nbut it works if you target it directly:\r\n\r\n```python\r\nAutoTokenizer.from_pretrained(\"https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-finetuned-conll03-english-config.json\")\r\n\r\nAutoModel.from_pretrained(\"https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-finetuned-conll03-english-pytorch_model.bin\")\r\n```",
"Hmm, I think I see the issue. @stefan-it @mfuntowicz we could either:\r\n- move `bert-large-cased-finetuned-conll03-english` to `dbmdz/bert-large-cased-finetuned-conll03-english`\r\n- or add shortcut model names inside the codebase (config, model, tokenizer)\r\n\r\nWhat do you think?\r\n\r\n(also kinda related to #2281)",
"@julien-c I think it would be better to move the model under the `dbmdz` namespace - as it is no \"official\" model!",
"@julien-c moving to *dbmdz* is fine. We need to update the default NER pipeline's model provider to reflect the new path. ",
"Model now lives at https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english\r\n\r\nLet me know if everything works correctly!",
"Works perfectly now, thanks!"
] | 1,578 | 1,579 | 1,579 | NONE | null | ## 🐛 Bug
Model I am using (Bert, XLNet....): `bert-base-cased-finetuned-conll03-english`
Language I am using the model on (English, Chinese....): English
The problem arises when using:
* [x] the official example scripts: running a small snippet from docs (see below)
* [ ] my own modified scripts: (give details)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: just trying to load the model at this stage
## To Reproduce
Steps to reproduce the behavior:
I'm following the instructions at https://huggingface.co/bert-large-cased-finetuned-conll03-english but failing at the first hurdle. This is the snippet from the docs that I've run:
```python
tokenizer = AutoTokenizer.from_pretrained("bert-large-cased-finetuned-conll03-english")
model = AutoModel.from_pretrained("bert-large-cased-finetuned-conll03-english")
```
It fails with this message:
```
OSError: Model name 'bert-base-cased-finetuned-conll03-english' was not found in model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, bert-base-japanese, bert-base-japanese-whole-word-masking, bert-base-japanese-char, bert-base-japanese-char-whole-word-masking, bert-base-finnish-cased-v1, bert-base-finnish-uncased-v1). We assumed 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-finetuned-conll03-english/config.json' was a path or url to a configuration file named config.json or a directory containing such a file but couldn't find any such file at this path or url.
```
The message mentions looking at https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-finetuned-conll03-english/config.json and finding nothing.
I also tried with the CLI: `transformers-cli download bert-base-cased-finetuned-conll03-english` but I'm afraid that failed with a similar message. However both methods work for the namespaced models, e.g. `dbmdz/bert-base-italian-cased`.
## Expected behavior
The community model should download. :)
## Environment
* OS: openSUSE Tumbleweed 20200101
* Python version: 3.7
* PyTorch version: 1.3.1
* PyTorch Transformers version (or branch): 2.3.0
* Using GPU ? n/a
* Distributed or parallel setup ? n/a
* Any other relevant information:
## Additional context
I browsed https://s3.amazonaws.com/models.huggingface.co/ and see that the model is there, but paths are like:
https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-finetuned-conll03-english-config.json
rather than:
https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-finetuned-conll03-english/config.json
(note `-config.json` vs `/config.json`)
If I download the files manually and rename them, the model loads, so it looks like just a naming problem. A rough sketch of that manual workaround is below.
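This is an illustrative sketch only (the local directory name is arbitrary), assuming the three S3 files have been downloaded and renamed to `config.json`, `pytorch_model.bin` and `vocab.txt`:
```python
from transformers import AutoModel, AutoTokenizer

# Hypothetical local folder containing the renamed files
local_dir = "./bert-large-cased-finetuned-conll03-english"
tokenizer = AutoTokenizer.from_pretrained(local_dir)
model = AutoModel.from_pretrained(local_dir)
```
 | {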
"url": "https://api.github.com/repos/huggingface/transformers/issues/2392/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2392/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2391 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2391/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2391/comments | https://api.github.com/repos/huggingface/transformers/issues/2391/events | https://github.com/huggingface/transformers/issues/2391 | 544,813,178 | MDU6SXNzdWU1NDQ4MTMxNzg= | 2,391 | What dataset was used for the NER results reported in the docs for bert/roberta-large-cased and distilbert-base-uncased models? | {
"login": "ohmeow",
"id": 14000,
"node_id": "MDQ6VXNlcjE0MDAw",
"avatar_url": "https://avatars.githubusercontent.com/u/14000?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ohmeow",
"html_url": "https://github.com/ohmeow",
"followers_url": "https://api.github.com/users/ohmeow/followers",
"following_url": "https://api.github.com/users/ohmeow/following{/other_user}",
"gists_url": "https://api.github.com/users/ohmeow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ohmeow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ohmeow/subscriptions",
"organizations_url": "https://api.github.com/users/ohmeow/orgs",
"repos_url": "https://api.github.com/users/ohmeow/repos",
"events_url": "https://api.github.com/users/ohmeow/events{/privacy}",
"received_events_url": "https://api.github.com/users/ohmeow/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,578 | 1,583 | 1,583 | CONTRIBUTOR | null | ## ❓ Questions & Help
Regarding [this section in the docs](https://huggingface.co/transformers/examples.html#comparing-bert-large-cased-roberta-large-cased-and-distilbert-base-uncased) and the NER results using bert-large-cased, roberta-large-cased, and distilbert-base-uncased ...
**What dataset was used?**
When I try them with the GermanEval2014 dataset, the results are inferior to those of the multilingual models (which makes sense) ... so my question: on what dataset(s) were these models trained to get the excellent F scores reported in the docs? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2391/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2391/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2390 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2390/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2390/comments | https://api.github.com/repos/huggingface/transformers/issues/2390/events | https://github.com/huggingface/transformers/issues/2390 | 544,598,208 | MDU6SXNzdWU1NDQ1OTgyMDg= | 2,390 | Pipelines support | {
"login": "AlexanderKUA",
"id": 4736996,
"node_id": "MDQ6VXNlcjQ3MzY5OTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4736996?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AlexanderKUA",
"html_url": "https://github.com/AlexanderKUA",
"followers_url": "https://api.github.com/users/AlexanderKUA/followers",
"following_url": "https://api.github.com/users/AlexanderKUA/following{/other_user}",
"gists_url": "https://api.github.com/users/AlexanderKUA/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AlexanderKUA/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AlexanderKUA/subscriptions",
"organizations_url": "https://api.github.com/users/AlexanderKUA/orgs",
"repos_url": "https://api.github.com/users/AlexanderKUA/repos",
"events_url": "https://api.github.com/users/AlexanderKUA/events{/privacy}",
"received_events_url": "https://api.github.com/users/AlexanderKUA/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1771187924,
"node_id": "MDU6TGFiZWwxNzcxMTg3OTI0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Pipeline",
"name": "Core: Pipeline",
"color": "FF7066",
"default": false,
"description": "Internals of the library; Pipeline."
}
] | closed | false | null | [] | [
"Hi @AlexanderKUA, thanks for reporting this issue.\r\n\r\nCan you give a try to the following commit 088daf78d45bed144fe2af84b538f573573bd01d and let us know if it solves your issue ?\r\n\r\n```python\r\nfrom transformers import pipeline\r\nnlp = pipeline('feature-extraction', model='distilbert-base-uncased', device=0)\r\nprint(nlp(['cybersecurity', 'cyber security', 'agriculture', 'data']))\r\n```\r\n\r\nThanks, \r\nMorgan",
"Hi @mfuntowicz \r\nI checked your commit. Yes, it solves the issue. Thanks a lot."
] | 1,577 | 1,578 | 1,578 | NONE | null | ## 🐛 Bug
<!-- Important information -->
Model I am using (Bert, XLNet....):
I'm using the roberta-base model for feature extraction through the pipeline functionality.
Language I am using the model on: English texts.
The problem arises when using:
* [x ] my own modified scripts: (give details)
```
from transformers import pipeline
import torch
import numpy as np  # np is used in encode() below
#torch.set_default_tensor_type('torch.cuda.FloatTensor')
nlp = pipeline('feature-extraction', model='roberta-base', tokenizer='roberta-base', device=0)
def encode(input):
with nlp.device_placement():
return np.array(nlp(input)).mean(axis=1)
results = encode(['cybersecurity', 'cyber security', 'agriculture', 'data'])
```
## To Reproduce
Steps to reproduce the behavior:
1. Just run code above.
Error details
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-6-f0b27f7cd838> in <module>
----> 1 encode(['cybersecurity', 'cyber security', 'agriculture', 'data']).shape
<ipython-input-5-a0628a1cb908> in encode(input)
12 def encode(input):
13 with nlp.device_placement():
---> 14 return np.array(nlp(input)).mean(axis=1)
~/anaconda3/lib/python3.6/site-packages/transformers/pipelines.py in __call__(self, *args, **kwargs)
442
443 def __call__(self, *args, **kwargs):
--> 444 return super().__call__(*args, **kwargs).tolist()
445
446
~/anaconda3/lib/python3.6/site-packages/transformers/pipelines.py in __call__(self, *texts, **kwargs)
402 # Filter out features not available on specific models
403 inputs = self.inputs_for_model(inputs)
--> 404 return self._forward(inputs)
405
406 def _forward(self, inputs):
~/anaconda3/lib/python3.6/site-packages/transformers/pipelines.py in _forward(self, inputs)
417 else:
418 with torch.no_grad():
--> 419 predictions = self.model(**inputs)[0].cpu()
420
421 return predictions.numpy()
~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
539 result = self._slow_forward(*input, **kwargs)
540 else:
--> 541 result = self.forward(*input, **kwargs)
542 for hook in self._forward_hooks.values():
543 hook_result = hook(self, input, result)
~/anaconda3/lib/python3.6/site-packages/transformers/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask)
733 head_mask = [None] * self.config.num_hidden_layers
734
--> 735 embedding_output = self.embeddings(input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds)
736 encoder_outputs = self.encoder(embedding_output,
737 attention_mask=extended_attention_mask,
~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
539 result = self._slow_forward(*input, **kwargs)
540 else:
--> 541 result = self.forward(*input, **kwargs)
542 for hook in self._forward_hooks.values():
543 hook_result = hook(self, input, result)
~/anaconda3/lib/python3.6/site-packages/transformers/modeling_roberta.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds)
68 token_type_ids=token_type_ids,
69 position_ids=position_ids,
---> 70 inputs_embeds=inputs_embeds)
71
72
~/anaconda3/lib/python3.6/site-packages/transformers/modeling_bert.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds)
184
185 if inputs_embeds is None:
--> 186 inputs_embeds = self.word_embeddings(input_ids)
187 position_embeddings = self.position_embeddings(position_ids)
188 token_type_embeddings = self.token_type_embeddings(token_type_ids)
~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
539 result = self._slow_forward(*input, **kwargs)
540 else:
--> 541 result = self.forward(*input, **kwargs)
542 for hook in self._forward_hooks.values():
543 hook_result = hook(self, input, result)
~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/sparse.py in forward(self, input)
112 return F.embedding(
113 input, self.weight, self.padding_idx, self.max_norm,
--> 114 self.norm_type, self.scale_grad_by_freq, self.sparse)
115
116 def extra_repr(self):
~/anaconda3/lib/python3.6/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
1482 # remove once script supports set_grad_enabled
1483 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1484 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
1485
1486
RuntimeError: Expected object of device type cuda but got device type cpu for argument #3 'index' in call to _th_index_select
```
## Expected behavior
Sentences encoded properly.
## Environment
* OS: Ubuntu 18.01
* Python version: Python 3.6.5 :: Anaconda, Inc.
* PyTorch version: 1.3.1
* PyTorch Transformers version (or branch): 2.3.0 and master
* Using GPU ? yes
* Distributed or parallel setup ? No
* Any other relevant information:
## Additional context
`torch.set_default_tensor_type('torch.cuda.FloatTensor')`
Uncommenting this line partially solves the issue: the error about CUDA tensors disappears, but those sentences still cannot be encoded properly
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-10-f0b27f7cd838> in <module>
----> 1 encode(['cybersecurity', 'cyber security', 'agriculture', 'data']).shape
<ipython-input-9-138f4526e218> in encode(input)
12 def encode(input):
13 with nlp.device_placement():
---> 14 return np.array(nlp(input)).mean(axis=1)
~/anaconda3/lib/python3.6/site-packages/transformers/pipelines.py in __call__(self, *args, **kwargs)
442
443 def __call__(self, *args, **kwargs):
--> 444 return super().__call__(*args, **kwargs).tolist()
445
446
~/anaconda3/lib/python3.6/site-packages/transformers/pipelines.py in __call__(self, *texts, **kwargs)
402 # Filter out features not available on specific models
403 inputs = self.inputs_for_model(inputs)
--> 404 return self._forward(inputs)
405
406 def _forward(self, inputs):
~/anaconda3/lib/python3.6/site-packages/transformers/pipelines.py in _forward(self, inputs)
417 else:
418 with torch.no_grad():
--> 419 predictions = self.model(**inputs)[0].cpu()
420
421 return predictions.numpy()
~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
539 result = self._slow_forward(*input, **kwargs)
540 else:
--> 541 result = self.forward(*input, **kwargs)
542 for hook in self._forward_hooks.values():
543 hook_result = hook(self, input, result)
~/anaconda3/lib/python3.6/site-packages/transformers/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask)
733 head_mask = [None] * self.config.num_hidden_layers
734
--> 735 embedding_output = self.embeddings(input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds)
736 encoder_outputs = self.encoder(embedding_output,
737 attention_mask=extended_attention_mask,
~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
539 result = self._slow_forward(*input, **kwargs)
540 else:
--> 541 result = self.forward(*input, **kwargs)
542 for hook in self._forward_hooks.values():
543 hook_result = hook(self, input, result)
~/anaconda3/lib/python3.6/site-packages/transformers/modeling_roberta.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds)
68 token_type_ids=token_type_ids,
69 position_ids=position_ids,
---> 70 inputs_embeds=inputs_embeds)
71
72
~/anaconda3/lib/python3.6/site-packages/transformers/modeling_bert.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds)
188 token_type_embeddings = self.token_type_embeddings(token_type_ids)
189
--> 190 embeddings = inputs_embeds + position_embeddings + token_type_embeddings
191 embeddings = self.LayerNorm(embeddings)
192 embeddings = self.dropout(embeddings)
RuntimeError: CUDA error: device-side assert triggered
```
The same error occurs even with `CUDA_LAUNCH_BLOCKING=1`. If we encode sentence by sentence instead, everything works (see the sketch below).
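For reference, the sentence-by-sentence workaround that does run looks roughly like this (a sketch of what was tested, reusing the `nlp` pipeline defined above; why it avoids the error is an assumption about the batched path):
```
import numpy as np

def encode_one_by_one(sentences):
    # Encoding each sentence separately avoids whatever goes wrong in the batched call
    features = []
    for sentence in sentences:
        with nlp.device_placement():
            features.append(np.array(nlp(sentence)).mean(axis=1))
    return np.concatenate(features, axis=0)

results = encode_one_by_one(['cybersecurity', 'cyber security', 'agriculture', 'data'])
```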
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2390/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2390/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2389 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2389/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2389/comments | https://api.github.com/repos/huggingface/transformers/issues/2389/events | https://github.com/huggingface/transformers/pull/2389 | 544,582,032 | MDExOlB1bGxSZXF1ZXN0MzU4NjYxNTI3 | 2,389 | update the config.is_decoder=True before initialize the decoder | {
"login": "zlinao",
"id": 33000929,
"node_id": "MDQ6VXNlcjMzMDAwOTI5",
"avatar_url": "https://avatars.githubusercontent.com/u/33000929?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zlinao",
"html_url": "https://github.com/zlinao",
"followers_url": "https://api.github.com/users/zlinao/followers",
"following_url": "https://api.github.com/users/zlinao/following{/other_user}",
"gists_url": "https://api.github.com/users/zlinao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zlinao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zlinao/subscriptions",
"organizations_url": "https://api.github.com/users/zlinao/orgs",
"repos_url": "https://api.github.com/users/zlinao/repos",
"events_url": "https://api.github.com/users/zlinao/events{/privacy}",
"received_events_url": "https://api.github.com/users/zlinao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2389?src=pr&el=h1) Report\n> Merging [#2389](https://codecov.io/gh/huggingface/transformers/pull/2389?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9261c7f771fccfa2a2cb78ae544adef2f6eb402b?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2389?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2389 +/- ##\n=======================================\n Coverage 73.24% 73.24% \n=======================================\n Files 87 87 \n Lines 15001 15001 \n=======================================\n Hits 10988 10988 \n Misses 4013 4013\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2389?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2389?src=pr&el=footer). Last update [9261c7f...9261c7f](https://codecov.io/gh/huggingface/transformers/pull/2389?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"But these are not decoders, they're encoders with an additional language modeling head?",
"> But these are not decoders, they're encoders with an additional language modeling head?\r\n\r\nOh, thanks to point out my mistake, I should actually modify the `modeling_encoder_decoder.py` file. I accidentally closed this pull request and made a new one #2435 . "
] | 1,577 | 1,578 | 1,578 | NONE | null | Currently the `PreTrainedEncoderDecoder` class fails to initialize the "cross-attention layer" since it updates `decoder.config.is_decoder = True` after decoder initialization. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2389/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2389/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2389",
"html_url": "https://github.com/huggingface/transformers/pull/2389",
"diff_url": "https://github.com/huggingface/transformers/pull/2389.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2389.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/2388 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2388/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2388/comments | https://api.github.com/repos/huggingface/transformers/issues/2388/events | https://github.com/huggingface/transformers/issues/2388 | 544,568,124 | MDU6SXNzdWU1NDQ1NjgxMjQ= | 2,388 | Can't load finetuned model properly. | {
"login": "elixium",
"id": 7610370,
"node_id": "MDQ6VXNlcjc2MTAzNzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/7610370?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elixium",
"html_url": "https://github.com/elixium",
"followers_url": "https://api.github.com/users/elixium/followers",
"following_url": "https://api.github.com/users/elixium/following{/other_user}",
"gists_url": "https://api.github.com/users/elixium/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elixium/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elixium/subscriptions",
"organizations_url": "https://api.github.com/users/elixium/orgs",
"repos_url": "https://api.github.com/users/elixium/repos",
"events_url": "https://api.github.com/users/elixium/events{/privacy}",
"received_events_url": "https://api.github.com/users/elixium/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi, I met the same problem as you mentioned! Do you fix it? \r\nmy question is here, https://github.com/huggingface/transformers/issues/2402",
"No, not yet :/ @trueto \r\nI think it saves somehow wrong model but i am not sure. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,577 | 1,584 | 1,584 | NONE | null | I am building a joint BERT model. After training, I evaluate it before saving and it reaches about 95% accuracy. But when I save this trained model and load it back, I get awful results. I hope you can help me figure out why it doesn't load properly.
Here is the relevant part of my code:
```
class JointBertClassification(BertPreTrainedModel):
def __init__(self, model_name, config, num_intent_labels, num_slot_labels, args):
super(JointBertClassification, self).__init__(config)
self.num_intent_labels = num_intent_labels
self.num_slot_labels = num_slot_labels
dropout_rate = args[
"dropout_rate"
]
self.bert = BertModel.from_pretrained(
model_name, config=self.bert_config
) # Load pretrained bert
self.intent_classifier = IntentClassifier(
config.hidden_size, num_intent_labels, dropout_rate
)
self.slot_classifier = SlotClassifier(
config.hidden_size, num_slot_labels, dropout_rate
)
# self.init_weights()
def forward(
self,
input_ids,
attention_mask,
token_type_ids,
intent_label_ids,
slot_label_ids,
):
...
```
```
class JointModel:
def __init__(
self, model_type, model_name, intents=None, slots=None, args=None, use_cuda=None
):
"""
Initializes a Joint Model
Args:
model_type: The type of model
model_name: Default Transformer model name or path to directory containing Transformer model file
intents (optional): A list of all Intent labels. If not given ATIS intents will set as default.
slots (optional): A list of all Slot labels. If not given ATIS slots will set as default.
args (optional): Default args will be used if this parameter is not provided. If provided, it should be a dict containing the args that should be changed in the default args.
use_cuda (optional): Use GPU if available. Setting to False will force model to use CPU only.
"""
MODEL_CLASSES = {"bert": (BertConfig, JointBertClassification, BertTokenizer)}
self.config_class, self.model_class, tokenizer_class = MODEL_CLASSES[model_type]
if intents:
self.intent_labels = intents
else:
self.intent_labels = pd.read_csv(
"jointbert/data/atis/vocab.intent", header=None, index_col=0
).index.tolist()
self.num_intents = len(self.intent_labels)
if slots:
self.slot_labels = slots
else:
self.slot_labels = pd.read_csv(
"jointbert/data/atis/vocab.slot", header=None, index_col=0
).index.tolist()
self.num_slots = len(self.slot_labels)
self.tokenizer = tokenizer_class.from_pretrained(model_name)
self.bert_config = self.config_class.from_pretrained(model_name)
self.model = self.model_class(
model_name,
self.bert_config,
num_slot_labels=self.num_slots,
num_intent_labels=self.num_intents,
args={"dropout_rate": 0.2},
)
if use_cuda:
if torch.cuda.is_available():
self.device = torch.device("cuda")
else:
raise ValueError(
"'use_cuda' set to True when cuda is unavaiable. Make sure CUDA is avaiable or set use_cuda=False"
)
else:
self.device = "cpu"
self.results = {}
self.args = {
"output_dir": "outputs/",
"cache_dir": "cache_dir/",
"fp16": False,
"fp16_opt_level": "O1",
"max_seq_length": 128,
"train_batch_size": 8,
"gradient_accumulation_steps": 1,
"eval_batch_size": 8,
"num_train_epochs": 1,
"weight_decay": 0,
"learning_rate": 4e-5,
"adam_epsilon": 1e-8,
"warmup_ratio": 0.06,
"warmup_steps": 0,
"max_grad_norm": 1.0,
"logging_steps": 50,
"save_steps": 2000,
"evaluate_during_training": False,
"overwrite_output_dir": False,
"reprocess_input_data": False,
"process_count": 1,
"n_gpu": 1,
"silent": False,
}
if args:
self.args.update(args)
self.args["model_name"] = model_name
self.args["model_type"] = model_type
self.pad_token_label_id = CrossEntropyLoss().ignore_index
```
The Saving Part after training
```
def train_model(
self,
train_data,
output_dir=None,
show_running_loss=True,
args=None,
eval_df=None,
):
if args:
self.args.update(args)
if self.args["silent"]:
show_running_loss = False
if not output_dir:
output_dir = self.args["output_dir"]
if (
os.path.exists(output_dir)
and os.listdir(output_dir)
and not self.args["overwrite_output_dir"]
):
raise ValueError("--")
self._move_model_to_device()
train_dataset = self.load_and_cache_examples(train_data)
global_set, tr_loss = self.train(
train_dataset,
output_dir,
show_running_loss=show_running_loss,
eval_df=eval_df,
)
if not os.path.exists(output_dir):
os.makedirs(output_dir)
model_to_save = (
self.model.module if hasattr(self.model, "module") else self.model
)
model_to_save.save_pretrained(output_dir)
self.tokenizer.save_pretrained(output_dir)
torch.save(self.args, os.path.join(output_dir, "training_args.bin"))
print(
"Training of {} model complete. Saved to {}. Training Loss : {}".format(
self.args["model_type"], output_dir, tr_loss
)
)
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2388/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2388/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2387 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2387/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2387/comments | https://api.github.com/repos/huggingface/transformers/issues/2387/events | https://github.com/huggingface/transformers/issues/2387 | 544,556,827 | MDU6SXNzdWU1NDQ1NTY4Mjc= | 2,387 | Pre-trained model returns different outputs(random outputs) | {
"login": "houdaM97",
"id": 43147098,
"node_id": "MDQ6VXNlcjQzMTQ3MDk4",
"avatar_url": "https://avatars.githubusercontent.com/u/43147098?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/houdaM97",
"html_url": "https://github.com/houdaM97",
"followers_url": "https://api.github.com/users/houdaM97/followers",
"following_url": "https://api.github.com/users/houdaM97/following{/other_user}",
"gists_url": "https://api.github.com/users/houdaM97/gists{/gist_id}",
"starred_url": "https://api.github.com/users/houdaM97/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/houdaM97/subscriptions",
"organizations_url": "https://api.github.com/users/houdaM97/orgs",
"repos_url": "https://api.github.com/users/houdaM97/repos",
"events_url": "https://api.github.com/users/houdaM97/events{/privacy}",
"received_events_url": "https://api.github.com/users/houdaM97/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I found too. \"last_hidden_states\" was not fixed when I reload pretrain model. I think we miss something. My question is here <https://github.com/huggingface/transformers/issues/2386>, maybe help you.",
"Hi @houdaM97, this is due to the fact that the pretrained archive `xlnet-base-cased` does not contain keys for the question answering head, only for the base transformer model. This means that the question answering head will be randomly initialized and will output different results at each run.\r\n\r\nIn order to see which keys are missing, you can set the flag `output_loading_info` to `True` in the `from_pretrained` method:\r\n\r\n```py\r\nmodel, loading_info = TFXLNetForQuestionAnsweringSimple.from_pretrained(\"xlnet-base-cased\", output_loading_info=True)\r\nprint(\"Loading info\", loading_info)\r\n\r\n# Loading info {'missing_keys': ['qa_outputs'], 'unexpected_keys': ['lm_loss'], 'error_msgs': []}\r\n```\r\n\r\nHere you can see that the `qa_outputs` value is missing and that the `lm_loss` value was present in the checkpoint but not needed for that specific model. In order to use this model for question answering you would first need to fine-tune this `qa_outputs` layers to a question answering task like SQuAD (you can use the [run_squad.py](https://github.com/huggingface/transformers/blob/master/examples/run_squad.py) script for this).\r\n\r\nWe have a few models which are already fine-tuned on SQuAD, the list is available [here](https://huggingface.co/transformers/pretrained_models.html) (look for squad). You can also use some community fine-tuned models, which are visible [here](https://huggingface.co/models).",
"Hi @LysandreJik , does the tensorflow version of run_squad.py exist?",
"Hi @houdaM97, not yet but I'm actively working on it, alongside other projects. I'm aiming at next week for the first working version.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,577 | 1,584 | 1,584 | NONE | null | ## ❓ Questions & Help
Hello,
I have recently been experimenting with the Hugging Face library and wrote a simple script for a question answering task. For that, I used the pre-trained TFXLNetForQuestionAnsweringSimple model, but I get different outputs for the same inputs and the same model each time I run the program.
Did I miss something?
Here is my script:
```
import tensorflow as tf
from transformers import XLNetTokenizer, TFXLNetForQuestionAnsweringSimple

context = "Jim Henson was a puppeteer"
question = "Who was Jim Henson ?"

# XLNet
tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
model = TFXLNetForQuestionAnsweringSimple.from_pretrained("xlnet-base-cased")

en_plus = tokenizer.encode_plus(context, question, add_special_tokens=True)
en = en_plus['input_ids']
token_type_ids = en_plus['token_type_ids']
input_ids = tf.constant([en])
segments_tensors = tf.constant([token_type_ids])

outputs = model(input_ids)
start_scores, end_scores = outputs[:2]
ss = tf.argmax(start_scores.numpy()[0]).numpy()
es = tf.argmax(end_scores.numpy()[0]).numpy()
answer = tokenizer.decode(en[ss: es+1], clean_up_tokenization_spaces=True)
print(answer)
```
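A minimal sketch for pinning the randomness between runs (assuming TensorFlow 2.x); this only makes the run-to-run variation repeatable, it does not by itself make the outputs meaningful:

```python
# Illustrative sketch (TensorFlow 2.x assumed): set the seeds *before* building the
# model so that any randomly initialized layers get the same values on every run.
import random

import numpy as np
import tensorflow as tf

random.seed(42)
np.random.seed(42)
tf.random.set_seed(42)
```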
Thank you in advance for your help.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2387/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2387/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2386 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2386/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2386/comments | https://api.github.com/repos/huggingface/transformers/issues/2386/events | https://github.com/huggingface/transformers/issues/2386 | 544,506,682 | MDU6SXNzdWU1NDQ1MDY2ODI= | 2,386 | Different usage between BertModel and AlbertModel | {
"login": "renjunxiang",
"id": 34116367,
"node_id": "MDQ6VXNlcjM0MTE2MzY3",
"avatar_url": "https://avatars.githubusercontent.com/u/34116367?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/renjunxiang",
"html_url": "https://github.com/renjunxiang",
"followers_url": "https://api.github.com/users/renjunxiang/followers",
"following_url": "https://api.github.com/users/renjunxiang/following{/other_user}",
"gists_url": "https://api.github.com/users/renjunxiang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/renjunxiang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/renjunxiang/subscriptions",
"organizations_url": "https://api.github.com/users/renjunxiang/orgs",
"repos_url": "https://api.github.com/users/renjunxiang/repos",
"events_url": "https://api.github.com/users/renjunxiang/events{/privacy}",
"received_events_url": "https://api.github.com/users/renjunxiang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Did you do model.eval() to disable dropout and norm before torch.no_grad()? ",
"Yes. Because they didn‘t’ throw any exception, I'm a little confused about their usage.\r\n```\r\nimport torch\r\nfrom transformers import BertTokenizer, BertModel\r\nfrom transformers import AlbertTokenizer, AlbertModel\r\nfrom transformers import RobertaTokenizer, RobertaModel\r\n\r\ndevice = 'cuda:0'\r\n\r\n# https://storage.googleapis.com/albert_models/albert_base_zh.tar.gz\r\nbert_path = 'D:/pretrain/pytorch/albert_base/'\r\ntokenizer = BertTokenizer.from_pretrained(bert_path)\r\nBERT = AlbertModel.from_pretrained(bert_path) # fixed\r\n\r\n'''\r\nbert_path = 'D:/pretrain/pytorch/albert_base/'\r\ntokenizer = BertTokenizer.from_pretrained(bert_path)\r\nBERT = BertModel.from_pretrained(bert_path) # random output\r\n'''\r\n\r\n'''\r\n# https://drive.google.com/open?id=1eHM3l4fMo6DsQYGmey7UZGiTmQquHw25\r\nbert_path = 'D:/pretrain/pytorch/chinese_roberta_wwm_ext/'\r\ntokenizer = BertTokenizer.from_pretrained(bert_path)\r\nBERT = BertModel.from_pretrained(bert_path) # fixed\r\n'''\r\n\r\n'''\r\nbert_path = 'D:/pretrain/pytorch/chinese_roberta_wwm_ext/'\r\ntokenizer = BertTokenizer.from_pretrained(bert_path)\r\nBERT = RobertaModel.from_pretrained(bert_path) # random output\r\n'''\r\n\r\nBERT.eval()\r\nBERT = BERT.to(device)\r\n\r\ntext_seqs = []\r\nsegments_ids = []\r\ntext_seq = tokenizer.convert_tokens_to_ids(['[CLS]', '我', '爱', '北', '京', '[SEP]', '[PAD]'])\r\ntext_seqs.append(text_seq)\r\nsegments_ids.append([0] * 7)\r\ntext_seqs = torch.LongTensor(text_seqs).to(device)\r\nsegments_ids = torch.LongTensor(segments_ids).to(device)\r\n\r\nmask_bert = torch.where(text_seqs == 0,\r\n torch.zeros_like(text_seqs),\r\n torch.ones_like(text_seqs))\r\nwith torch.no_grad():\r\n sentence_features, _ = BERT(text_seqs, token_type_ids=segments_ids, attention_mask=mask_bert)\r\nsentence_features = sentence_features[-1]\r\n\r\nfor i in sentence_features:\r\n print(i[:4])\r\n```",
"@renjunxiang, you seem to be using the *same pretrained* checkpoint for both BERT and ALBERT. This should crash as these models are not the same.\r\n\r\nDo you face the same issue when loading from pretrained checkpoints hosted on our S3 (`bert-base-cased` and `albert-base-v2` for example) ?",
"@LysandreJik Yes, I used same pretrained Chinese albert model provided by Google(```albert_base_zh.tar```) and I used ```convert_albert_original_tf_checkpoint_to_pytorch.py``` to transform the model. \r\n\r\nBecause ```BertModel``` and ```AlbertModel``` didn‘t’ throw any exception, I thought they are interchangeable. Maybe the reason of random output is the missing key between ```BertModel``` and ```AlbertModel```? like <https://github.com/huggingface/transformers/issues/2387#issuecomment-571586232>\r\n\r\n```bert-base-cased``` and ```albert-base-v2``` are constrained to the function(```BertModel``` and ```AlbertModel```), so they are not interchangeable.\r\n\r\nIn my past projects, I used ```BertModel.from_pretrained``` to load pretrained model such as ```bert-base-chinese``` and ```chinese_roberta_wwm_ext```. \r\n\r\nI found ```RobertaModel``` could load ```chinese_roberta_wwm_ext``` and didn‘t’ throw any exception, but the output was random.\r\n\r\nSo is there some different usage between ```RobertaModel``` and ```BertModel``` if I want to get the ```last_hidden_states```? In my mind Roberta is one of BERT.\r\n\r\nthanks~\r\n\r\n\r\n",
"It's not really clear what you are trying to say. The models are obviously different, so use the appropriate init for the appropriate model (BERT for BERT weights, RoBERTa for RoBERTa weights). That being said, retrieving the last hidden states should be similar. You can compare the docs:\r\n\r\n- [RoBERTa](https://huggingface.co/transformers/model_doc/roberta.html#robertamodel)\r\n- [BERT](https://huggingface.co/transformers/model_doc/bert.html#bertmodel)",
"Thanks! I'll check it out."
] | 1,577 | 1,578 | 1,578 | NONE | null | ## ❓ Questions & Help
Hi~
```
bert_path = 'D:/pretrain/pytorch/albert_base/'
tokenizer = BertTokenizer.from_pretrained(bert_path)
BERT = BertModel.from_pretrained(bert_path)
...
with torch.no_grad():
last_hidden_states = BERT(input_ids)[0]
```
I found that ```last_hidden_states``` was not consistent (the values changed across runs) when I reloaded ```BertModel.from_pretrained(bert_path)```.
```
bert_path = 'D:/pretrain/pytorch/albert_base/'
tokenizer = BertTokenizer.from_pretrained(bert_path)
BERT = AlbertModel.from_pretrained(bert_path)
...
with torch.no_grad():
last_hidden_states = BERT(input_ids)[0]
```
I found that ```last_hidden_states``` was consistent.
But when I tried
```
bert_path = 'D:/pretrain/pytorch/chinese_roberta_wwm_ext/'
tokenizer = BertTokenizer.from_pretrained(bert_path)
BERT = RobertaModel.from_pretrained(bert_path)
...
with torch.no_grad():
last_hidden_states = BERT(input_ids)[0]
```
I found that ```last_hidden_states``` was still not consistent.
```
bert_path = 'D:/pretrain/pytorch/chinese_roberta_wwm_ext/'
tokenizer = BertTokenizer.from_pretrained(bert_path)
BERT = BertModel.from_pretrained(bert_path)
...
with torch.no_grad():
last_hidden_states = BERT(input_ids)[0]
```
I found that ```last_hidden_states``` was consistent.
Is there any difference in usage between BertModel, AlbertModel and RobertaModel?
In my past projects, I used BERT (frozen) + LSTM. This is the first time I have used ALBERT.
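For reference, here is a minimal sketch of the extraction pattern I would expect to behave the same for all three classes, assuming each checkpoint is loaded with its matching class (`bert-base-chinese` with `BertModel` is only an illustration):

```python
# Minimal sketch: deterministic extraction of the last hidden states.
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
model = BertModel.from_pretrained("bert-base-chinese")
model.eval()  # disable dropout so repeated runs give identical outputs

input_ids = torch.tensor([tokenizer.encode("我爱北京", add_special_tokens=True)])
with torch.no_grad():
    last_hidden_states = model(input_ids)[0]
print(last_hidden_states.shape)  # (batch_size, sequence_length, hidden_size)
```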
Thanks~
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2386/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2386/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2385 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2385/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2385/comments | https://api.github.com/repos/huggingface/transformers/issues/2385/events | https://github.com/huggingface/transformers/issues/2385 | 544,428,158 | MDU6SXNzdWU1NDQ0MjgxNTg= | 2,385 | The method os.rename() in file_utils.py make a permissionError | {
"login": "heroazhe",
"id": 22883367,
"node_id": "MDQ6VXNlcjIyODgzMzY3",
"avatar_url": "https://avatars.githubusercontent.com/u/22883367?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/heroazhe",
"html_url": "https://github.com/heroazhe",
"followers_url": "https://api.github.com/users/heroazhe/followers",
"following_url": "https://api.github.com/users/heroazhe/following{/other_user}",
"gists_url": "https://api.github.com/users/heroazhe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/heroazhe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/heroazhe/subscriptions",
"organizations_url": "https://api.github.com/users/heroazhe/orgs",
"repos_url": "https://api.github.com/users/heroazhe/repos",
"events_url": "https://api.github.com/users/heroazhe/events{/privacy}",
"received_events_url": "https://api.github.com/users/heroazhe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I have the same problem when downloading the pre-trained tokenizer. I also need help!",
"> I have the same problem when downloading the pre-trained tokenizer. I also need help!\r\n\r\nOnline download often occur different problems,so i download it first and use it locally.",
"Ok this should be solved on master now that #2384 is merged"
] | 1,577 | 1,579 | 1,578 | NONE | null | ## ❓ Questions & Help
Has anyone else run into this issue?

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2385/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2385/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2384 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2384/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2384/comments | https://api.github.com/repos/huggingface/transformers/issues/2384/events | https://github.com/huggingface/transformers/pull/2384 | 544,407,392 | MDExOlB1bGxSZXF1ZXN0MzU4NTIzOTMy | 2,384 | Releasing file lock | {
"login": "dimagalat",
"id": 15843978,
"node_id": "MDQ6VXNlcjE1ODQzOTc4",
"avatar_url": "https://avatars.githubusercontent.com/u/15843978?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dimagalat",
"html_url": "https://github.com/dimagalat",
"followers_url": "https://api.github.com/users/dimagalat/followers",
"following_url": "https://api.github.com/users/dimagalat/following{/other_user}",
"gists_url": "https://api.github.com/users/dimagalat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dimagalat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dimagalat/subscriptions",
"organizations_url": "https://api.github.com/users/dimagalat/orgs",
"repos_url": "https://api.github.com/users/dimagalat/repos",
"events_url": "https://api.github.com/users/dimagalat/events{/privacy}",
"received_events_url": "https://api.github.com/users/dimagalat/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2384?src=pr&el=h1) Report\n> Merging [#2384](https://codecov.io/gh/huggingface/transformers/pull/2384?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/80faf22b4ac194061a08fde09ad8b202118c151e?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2384?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2384 +/- ##\n=======================================\n Coverage 73.24% 73.24% \n=======================================\n Files 87 87 \n Lines 14989 14989 \n=======================================\n Hits 10979 10979 \n Misses 4010 4010\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2384?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2384/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `70.33% <100%> (ø)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2384?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2384?src=pr&el=footer). Last update [80faf22...d0e594f](https://codecov.io/gh/huggingface/transformers/pull/2384?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"@aaugustin could you have a quick look at this PR related to the filelock system?",
"This isn't directly related to the file lock system. Rather, it's related to moving the file rather than copying it.\r\n\r\nGiven the current implementation, closing the file before moving it (which is all this PR does) looks safe to me. We're still within the lock-protected section.\r\n\r\nCould you take this opportunity remove the following two lines?\r\n\r\n```python\r\n # we are copying the file before closing it, so flush to avoid truncation\r\n temp_file.flush()\r\n```\r\n\r\nIndeed, you're now closing the file before copying it. (To be honest, I should have removed them when I stopped copying the file and started moving it instead.)",
"@aaugustin I agree, `.flush()` is unnecessary, thanks for pointing it out. I've made the change.",
"Ok great thanks @dimagalat and @aaugustin"
] | 1,577 | 1,579 | 1,579 | CONTRIBUTOR | null | The `with` scope creates a file lock, which leads to the following error:
INFO:filelock:Lock 1408081097608 released on C:\Users\dimag\.cache\torch\transformers\26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084.lock
Traceback (most recent call last):
File "C:\Users\dimag\Anaconda3\envs\pytorch\lib\site-packages\transformers\tokenization_utils.py", line 398, in _from_pretrained
resume_download=resume_download,
File "C:\Users\dimag\Anaconda3\envs\pytorch\lib\site-packages\transformers\file_utils.py", line 212, in cached_path
user_agent=user_agent,
File "C:\Users\dimag\Anaconda3\envs\pytorch\lib\site-packages\transformers\file_utils.py", line 392, in get_from_cache
os.rename(temp_file.name, cache_path)
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\Users\\dimag\\.cache\\torch\\transformers\\tmpnhzxze8u' -> 'C:\\Users\\dimag\\.cache\\torch\\transformers\\26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084' | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2384/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2384/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2384",
"html_url": "https://github.com/huggingface/transformers/pull/2384",
"diff_url": "https://github.com/huggingface/transformers/pull/2384.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2384.patch",
"merged_at": 1579004342000
} |
https://api.github.com/repos/huggingface/transformers/issues/2383 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2383/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2383/comments | https://api.github.com/repos/huggingface/transformers/issues/2383/events | https://github.com/huggingface/transformers/issues/2383 | 544,403,685 | MDU6SXNzdWU1NDQ0MDM2ODU= | 2,383 | clarification on output | {
"login": "vr25",
"id": 22553367,
"node_id": "MDQ6VXNlcjIyNTUzMzY3",
"avatar_url": "https://avatars.githubusercontent.com/u/22553367?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vr25",
"html_url": "https://github.com/vr25",
"followers_url": "https://api.github.com/users/vr25/followers",
"following_url": "https://api.github.com/users/vr25/following{/other_user}",
"gists_url": "https://api.github.com/users/vr25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vr25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vr25/subscriptions",
"organizations_url": "https://api.github.com/users/vr25/orgs",
"repos_url": "https://api.github.com/users/vr25/repos",
"events_url": "https://api.github.com/users/vr25/events{/privacy}",
"received_events_url": "https://api.github.com/users/vr25/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,577 | 1,577 | 1,577 | NONE | null | Hi,
When using bert_uncased on the following sentence:
`Hello this is my dog`
and to get attentions I use:
`last_hidden_states, pooler_outputs, hidden_states, attentions = outputs`
`attentions` gives:
a tuple of 12 tensors where each tensor is of size [1,12,5,5]
I was wondering whether the 12 tensors in the tuple correspond to the attention heads or to the hidden layers.
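For context, here is a minimal sketch of what is being inspected (bert-base-uncased assumed; the shape comments reflect my reading of the docs):

```python
# Minimal sketch of how the attention outputs are structured.
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_hidden_states=True, output_attentions=True)
model.eval()

# no special tokens, to match the 5-token example above
input_ids = torch.tensor([tokenizer.encode("Hello this is my dog", add_special_tokens=False)])
with torch.no_grad():
    outputs = model(input_ids)
last_hidden_states, pooler_outputs, hidden_states, attentions = outputs

print(len(attentions))      # 12 -> one entry per hidden layer
print(attentions[0].shape)  # (batch_size, num_heads, seq_len, seq_len) -> the 12 here is the head dimension
```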
Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2383/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2383/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2382 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2382/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2382/comments | https://api.github.com/repos/huggingface/transformers/issues/2382/events | https://github.com/huggingface/transformers/pull/2382 | 544,391,545 | MDExOlB1bGxSZXF1ZXN0MzU4NTEyNjIx | 2,382 | Proposition to include community models in readme | {
"login": "clmnt",
"id": 821155,
"node_id": "MDQ6VXNlcjgyMTE1NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/821155?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/clmnt",
"html_url": "https://github.com/clmnt",
"followers_url": "https://api.github.com/users/clmnt/followers",
"following_url": "https://api.github.com/users/clmnt/following{/other_user}",
"gists_url": "https://api.github.com/users/clmnt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/clmnt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/clmnt/subscriptions",
"organizations_url": "https://api.github.com/users/clmnt/orgs",
"repos_url": "https://api.github.com/users/clmnt/repos",
"events_url": "https://api.github.com/users/clmnt/events{/privacy}",
"received_events_url": "https://api.github.com/users/clmnt/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2382?src=pr&el=h1) Report\n> Merging [#2382](https://codecov.io/gh/huggingface/transformers/pull/2382?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/629b22adcfe340c4e3babac83654da2fbd1bbf89?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2382?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2382 +/- ##\n=======================================\n Coverage 73.24% 73.24% \n=======================================\n Files 87 87 \n Lines 14989 14989 \n=======================================\n Hits 10979 10979 \n Misses 4010 4010\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2382?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2382?src=pr&el=footer). Last update [629b22a...a229a68](https://codecov.io/gh/huggingface/transformers/pull/2382?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,577 | 1,578 | 1,578 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2382/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2382/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2382",
"html_url": "https://github.com/huggingface/transformers/pull/2382",
"diff_url": "https://github.com/huggingface/transformers/pull/2382.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2382.patch",
"merged_at": 1578245832000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/2381 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2381/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2381/comments | https://api.github.com/repos/huggingface/transformers/issues/2381/events | https://github.com/huggingface/transformers/issues/2381 | 544,371,457 | MDU6SXNzdWU1NDQzNzE0NTc= | 2,381 | how to use distilledgpt2 | {
"login": "jackfeinmann5",
"id": 59409879,
"node_id": "MDQ6VXNlcjU5NDA5ODc5",
"avatar_url": "https://avatars.githubusercontent.com/u/59409879?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jackfeinmann5",
"html_url": "https://github.com/jackfeinmann5",
"followers_url": "https://api.github.com/users/jackfeinmann5/followers",
"following_url": "https://api.github.com/users/jackfeinmann5/following{/other_user}",
"gists_url": "https://api.github.com/users/jackfeinmann5/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jackfeinmann5/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jackfeinmann5/subscriptions",
"organizations_url": "https://api.github.com/users/jackfeinmann5/orgs",
"repos_url": "https://api.github.com/users/jackfeinmann5/repos",
"events_url": "https://api.github.com/users/jackfeinmann5/events{/privacy}",
"received_events_url": "https://api.github.com/users/jackfeinmann5/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi, you can use it as such:\r\n\r\n```py\r\nfrom transformers import GPT2Model, GPT2Tokenizer\r\n\r\nmodel = GPT2Model.from_pretrained(\"distilgpt2\")\r\ntokenizer = GPT2Tokenizer.from_pretrained(\"distilgpt2\")\r\n```\r\n\r\nYou can see the list of available models in the [pretrained section of our documentation](https://huggingface.co/transformers/pretrained_models.html).",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,577 | 1,583 | 1,583 | NONE | null | Hi
I want to use distilledgpt2, but I cannot find its config file or modeling files. Could you please show me how to use it?
thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2381/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2381/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2380 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2380/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2380/comments | https://api.github.com/repos/huggingface/transformers/issues/2380/events | https://github.com/huggingface/transformers/issues/2380 | 544,369,808 | MDU6SXNzdWU1NDQzNjk4MDg= | 2,380 | errors encountered with run_lm_finetuning.py | {
"login": "rabeehk",
"id": 6278280,
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehk",
"html_url": "https://github.com/rabeehk",
"followers_url": "https://api.github.com/users/rabeehk/followers",
"following_url": "https://api.github.com/users/rabeehk/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehk/orgs",
"repos_url": "https://api.github.com/users/rabeehk/repos",
"events_url": "https://api.github.com/users/rabeehk/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehk/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hello I also got the same error while running BERT.\r\n\r\nTraceback (most recent call last):\r\n File \"code/transformers-2.3.0/examples/run_lm_finetuning.py\", line 713, in <module>\r\n main()\r\n File \"code/transformers-2.3.0/examples/run_lm_finetuning.py\", line 663, in main\r\n global_step, tr_loss = train(args, train_dataset, model, tokenizer)\r\n File \"code/transformers-2.3.0/examples/run_lm_finetuning.py\", line 268, in train\r\n global_step = int(args.model_name_or_path.split(\"-\")[-1].split(\"/\")[0])\r\nValueError: invalid literal for int() with base 10: 'pytorch'\r\n\r\nCould anyone help?",
"@calusbr \r\nHi, for the error you reported if you set global_step = 0 it should work. ",
"Hi, thank you for raising this issue. Could you please let me know if 27c1b656cca75efa0cc414d3bf4e6aacf24829de fixed this issue by trying the updated script?",
"Hello, to solve this problem I added my checkpoint to a folder that has the same Transformer output.\r\n\r\n**new folder -> chekpoint-0**\r\n\r\nFolders:\r\n|\r\nchekpoint-0\r\n | vocab.txt\r\n | pytorch_model.bin\r\n | config.json\r\n\r\nglobal_step = int(args.model_name_or_path.split(\"-\")[-1].split(\"/\")[0])\r\n\r\n**Result:\r\nglobal_step = 0**\r\n",
"@rabeehk hello! I am also faced with the \"ValueError: num_samples should be a positive integeral value, but got num_samples=0\", Have you fixed this problem? thank you~",
"@LysandreJik I tried it 2020-1-9, It seems that this problem \"ValueError: num_samples should be a positive integeral value, but got num_samples=0\" still exists...",
"Hi\nI tested it, it does fix the first issue, thanks, but as I wrote in the\nfirst email, there are a couple of more errors, currently\nI got this errors, thanks:\n\n(transformer) rkarimi@vgnc002:/idiap/user/rkarimi/dev/lm_heads$ python\nrun_lm_original.py --output_dir=/idiap/temp/rkarimi/lm_heads/bert_original\n --model_type=bert\n--model_name_or_path=/idiap/temp/rkarimi/pretrained_transformers/bert-base-uncased/\n --do_train\n --train_data_file=/idiap/temp/rkarimi/resources/wikitext-2-raw/wiki.train.raw\n --do_eval\n --eval_data_file=/idiap/temp/rkarimi/resources/wikitext-2-raw/wiki.test.raw\n--mlm --block_size 510 --overwrite_output_dir --num_train_epochs 3\n--evaluate_during_training\n01/09/2020 09:37:59 - WARNING - __main__ - Process rank: -1, device:\ncuda, n_gpu: 1, distributed training: False, 16-bits training: False\n01/09/2020 09:37:59 - INFO - transformers.configuration_utils - loading\nconfiguration file\n/idiap/temp/rkarimi/pretrained_transformers/bert-base-uncased/config.json\n01/09/2020 09:37:59 - INFO - transformers.configuration_utils - Model\nconfig {\n \"attention_probs_dropout_prob\": 0.1,\n \"finetuning_task\": null,\n \"hidden_act\": \"gelu\",\n \"hidden_dropout_prob\": 0.1,\n \"hidden_size\": 768,\n \"id2label\": {\n \"0\": \"LABEL_0\",\n \"1\": \"LABEL_1\"\n },\n \"initializer_range\": 0.02,\n \"intermediate_size\": 3072,\n \"is_decoder\": false,\n \"label2id\": {\n \"LABEL_0\": 0,\n \"LABEL_1\": 1\n },\n \"layer_norm_eps\": 1e-12,\n \"max_position_embeddings\": 512,\n \"num_attention_heads\": 12,\n \"num_hidden_layers\": 12,\n \"num_labels\": 2,\n \"output_attentions\": false,\n \"output_hidden_states\": false,\n \"output_past\": true,\n \"pruned_heads\": {},\n \"torchscript\": false,\n \"type_vocab_size\": 2,\n \"use_bfloat16\": false,\n \"vocab_size\": 30522\n}\n\n01/09/2020 09:37:59 - INFO - transformers.tokenization_utils - Model name\n'/idiap/temp/rkarimi/pretrained_transformers/bert-base-uncased/' not found\nin model shortcut name list (bert-base-uncased, bert-large-uncased,\nbert-base-cased, bert-large-cased, bert-base-multilingual-uncased,\nbert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased,\nbert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking,\nbert-large-uncased-whole-word-masking-finetuned-squad,\nbert-large-cased-whole-word-masking-finetuned-squad,\nbert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased,\nbert-base-german-dbmdz-uncased, bert-base-finnish-cased-v1,\nbert-base-finnish-uncased-v1). 
Assuming\n'/idiap/temp/rkarimi/pretrained_transformers/bert-base-uncased/' is a path\nor url to a directory containing tokenizer files.\n01/09/2020 09:37:59 - INFO - transformers.tokenization_utils - Didn't\nfind file\n/idiap/temp/rkarimi/pretrained_transformers/bert-base-uncased/added_tokens.json.\nWe won't load it.\n01/09/2020 09:37:59 - INFO - transformers.tokenization_utils - Didn't\nfind file\n/idiap/temp/rkarimi/pretrained_transformers/bert-base-uncased/special_tokens_map.json.\nWe won't load it.\n01/09/2020 09:37:59 - INFO - transformers.tokenization_utils - Didn't\nfind file\n/idiap/temp/rkarimi/pretrained_transformers/bert-base-uncased/tokenizer_config.json.\nWe won't load it.\n01/09/2020 09:37:59 - INFO - transformers.tokenization_utils - loading\nfile /idiap/temp/rkarimi/pretrained_transformers/bert-base-uncased/vocab.txt\n01/09/2020 09:37:59 - INFO - transformers.tokenization_utils - loading\nfile None\n01/09/2020 09:37:59 - INFO - transformers.tokenization_utils - loading\nfile None\n01/09/2020 09:37:59 - INFO - transformers.tokenization_utils - loading\nfile None\n01/09/2020 09:37:59 - INFO - transformers.modeling_utils - loading\nweights file\n/idiap/temp/rkarimi/pretrained_transformers/bert-base-uncased/pytorch_model.bin\n01/09/2020 09:38:04 - INFO - transformers.modeling_utils - Weights from\npretrained model not used in BertForMaskedLM:\n['cls.seq_relationship.weight', 'cls.seq_relationship.bias']\n01/09/2020 09:38:09 - INFO - __main__ - Training/evaluation parameters\nNamespace(adam_epsilon=1e-08, block_size=510, cache_dir='', config_name='',\ndevice=device(type='cuda'), do_eval=True, do_lower_case=False,\ndo_train=True, eval_all_checkpoints=False,\neval_data_file='/idiap/temp/rkarimi/resources/wikitext-2-raw/wiki.test.raw',\nevaluate_during_training=True, fp16=False, fp16_opt_level='O1',\ngradient_accumulation_steps=1, learning_rate=5e-05, local_rank=-1,\nlogging_steps=50, max_grad_norm=1.0, max_steps=-1, mlm=True,\nmlm_probability=0.15,\nmodel_name_or_path='/idiap/temp/rkarimi/pretrained_transformers/bert-base-uncased/',\nmodel_type='bert', n_gpu=1, no_cuda=False, num_train_epochs=3.0,\noutput_dir='/idiap/temp/rkarimi/lm_heads/bert_original',\noverwrite_cache=False, overwrite_output_dir=True,\nper_gpu_eval_batch_size=4, per_gpu_train_batch_size=4, save_steps=50,\nsave_total_limit=None, seed=42, server_ip='', server_port='',\ntokenizer_name='',\ntrain_data_file='/idiap/temp/rkarimi/resources/wikitext-2-raw/wiki.train.raw',\nwarmup_steps=0, weight_decay=0.0)\n01/09/2020 09:38:09 - INFO - __main__ - Loading features from cached file\n/idiap/temp/rkarimi/pretrained_transformers/bert-base-uncased/_cached_lm_510_wiki.train.raw\n01/09/2020 09:38:09 - INFO - __main__ - ***** Running training *****\n01/09/2020 09:38:09 - INFO - __main__ - Num examples = 4312\n01/09/2020 09:38:09 - INFO - __main__ - Num Epochs = 3\n01/09/2020 09:38:09 - INFO - __main__ - Instantaneous batch size per\nGPU = 4\n01/09/2020 09:38:09 - INFO - __main__ - Total train batch size (w.\nparallel, distributed & accumulation) = 4\n01/09/2020 09:38:09 - INFO - __main__ - Gradient Accumulation steps = 1\n01/09/2020 09:38:09 - INFO - __main__ - Total optimization steps = 3234\n01/09/2020 09:38:09 - INFO - __main__ - Starting fine-tuning.\nEpoch: 0%|\n\n | 0/3 [00:00<?,\n?it/s/opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THCUNN/ClassNLLCriterion.cu:105:\nvoid cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *,\nlong *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype 
=\nfloat]: block: [0,0,0], thread: [0,0,0] Assertion `t >= 0 && t < n_classes`\nfailed.\n/opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THCUNN/ClassNLLCriterion.cu:105:\nvoid cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *,\nlong *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype =\nfloat]: block: [0,0,0], thread: [2,0,0] Assertion `t >= 0 && t < n_classes`\nfailed.\n/opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THCUNN/ClassNLLCriterion.cu:105:\nvoid cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *,\nlong *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype =\nfloat]: block: [0,0,0], thread: [3,0,0] Assertion `t >= 0 && t < n_classes`\nfailed.\n/opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THCUNN/ClassNLLCriterion.cu:105:\nvoid cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *,\nlong *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype =\nfloat]: block: [0,0,0], thread: [4,0,0] Assertion `t >= 0 && t < n_classes`\nfailed.\n/opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THCUNN/ClassNLLCriterion.cu:105:\nvoid cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *,\nlong *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype =\nfloat]: block: [0,0,0], thread: [5,0,0] Assertion `t >= 0 && t < n_classes`\nfailed.\n/opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THCUNN/ClassNLLCriterion.cu:105:\nvoid cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *,\nlong *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype =\nfloat]: block: [0,0,0], thread: [7,0,0] Assertion `t >= 0 && t < n_classes`\nfailed.\n/opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THCUNN/ClassNLLCriterion.cu:105:\nvoid cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *,\nlong *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype =\nfloat]: block: [0,0,0], thread: [8,0,0] Assertion `t >= 0 && t < n_classes`\nfailed.\n/opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THCUNN/ClassNLLCriterion.cu:105:\nvoid cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *,\nlong *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype =\nfloat]: block: [0,0,0], thread: [9,0,0] Assertion `t >= 0 && t < n_classes`\nfailed.\n/opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THCUNN/ClassNLLCriterion.cu:105:\nvoid cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *,\nlong *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype =\nfloat]: block: [0,0,0], thread: [10,0,0] Assertion `t >= 0 && t <\nn_classes` failed.\n/opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THCUNN/ClassNLLCriterion.cu:105:\nvoid cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *,\nlong *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype =\nfloat]: block: [0,0,0], thread: [11,0,0] Assertion `t >= 0 && t <\nn_classes` failed.\n/opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THCUNN/ClassNLLCriterion.cu:105:\nvoid cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *,\nlong *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype =\nfloat]: block: [0,0,0], thread: [12,0,0] Assertion `t >= 0 && t <\nn_classes` failed.\n/opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THCUNN/ClassNLLCriterion.cu:105:\nvoid cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *,\nlong *, Dtype 
*, int, int, int, int, long) [with Dtype = float, Acctype =\nfloat]: block: [0,0,0], thread: [13,0,0] Assertion `t >= 0 && t <\nn_classes` failed.\n/opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THCUNN/ClassNLLCriterion.cu:105:\nvoid cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *,\nlong *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype =\nfloat]: block: [0,0,0], thread: [14,0,0] Assertion `t >= 0 && t <\nn_classes` failed.\n/opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THCUNN/ClassNLLCriterion.cu:105:\nvoid cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *,\nlong *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype =\nfloat]: block: [0,0,0], thread: [15,0,0] Assertion `t >= 0 && t <\nn_classes` failed.\n/opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THCUNN/ClassNLLCriterion.cu:105:\nvoid cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *,\nlong *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype =\nfloat]: block: [0,0,0], thread: [16,0,0] Assertion `t >= 0 && t <\nn_classes` failed.\n/opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THCUNN/ClassNLLCriterion.cu:105:\nvoid cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *,\nlong *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype =\nfloat]: block: [0,0,0], thread: [17,0,0] Assertion `t >= 0 && t <\nn_classes` failed.\n/opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THCUNN/ClassNLLCriterion.cu:105:\nvoid cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *,\nlong *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype =\nfloat]: block: [0,0,0], thread: [19,0,0] Assertion `t >= 0 && t <\nn_classes` failed.\n/opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THCUNN/ClassNLLCriterion.cu:105:\nvoid cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *,\nlong *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype =\nfloat]: block: [0,0,0], thread: [20,0,0] Assertion `t >= 0 && t <\nn_classes` failed.\n/opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THCUNN/ClassNLLCriterion.cu:105:\nvoid cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *,\nlong *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype =\nfloat]: block: [0,0,0], thread: [21,0,0] Assertion `t >= 0 && t <\nn_classes` failed.\n/opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THCUNN/ClassNLLCriterion.cu:105:\nvoid cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *,\nlong *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype =\nfloat]: block: [0,0,0], thread: [22,0,0] Assertion `t >= 0 && t <\nn_classes` failed.\n/opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THCUNN/ClassNLLCriterion.cu:105:\nvoid cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *,\nlong *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype =\nfloat]: block: [0,0,0], thread: [24,0,0] Assertion `t >= 0 && t <\nn_classes` failed.\n/opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THCUNN/ClassNLLCriterion.cu:105:\nvoid cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *,\nlong *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype =\nfloat]: block: [0,0,0], thread: [25,0,0] Assertion `t >= 0 && t <\nn_classes` failed.\n/opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THCUNN/ClassNLLCriterion.cu:105:\nvoid 
cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *,\nlong *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype =\nfloat]: block: [0,0,0], thread: [26,0,0] Assertion `t >= 0 && t <\nn_classes` failed.\n/opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THCUNN/ClassNLLCriterion.cu:105:\nvoid cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *,\nlong *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype =\nfloat]: block: [0,0,0], thread: [27,0,0] Assertion `t >= 0 && t <\nn_classes` failed.\n/opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THCUNN/ClassNLLCriterion.cu:105:\nvoid cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *,\nlong *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype =\nfloat]: block: [0,0,0], thread: [28,0,0] Assertion `t >= 0 && t <\nn_classes` failed.\n/opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THCUNN/ClassNLLCriterion.cu:105:\nvoid cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *,\nlong *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype =\nfloat]: block: [0,0,0], thread: [29,0,0] Assertion `t >= 0 && t <\nn_classes` failed.\n/opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THCUNN/ClassNLLCriterion.cu:105:\nvoid cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *,\nlong *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype =\nfloat]: block: [0,0,0], thread: [30,0,0] Assertion `t >= 0 && t <\nn_classes` failed.\n/opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THCUNN/ClassNLLCriterion.cu:105:\nvoid cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *,\nlong *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype =\nfloat]: block: [0,0,0], thread: [31,0,0] Assertion `t >= 0 && t <\nn_classes` failed.\nTraceback (most recent call last):\n File \"run_lm_original.py\", line 717, in <module>\n main()\n File \"run_lm_original.py\", line 667, in main\n global_step, tr_loss = train(args, train_dataset, model, tokenizer)\n File \"run_lm_original.py\", line 316, in train\n loss.backward()\n File\n\"/idiap/user/rkarimi/libs/anaconda3/envs/transformer/lib/python3.6/site-packages/torch/tensor.py\",\nline 118, in backward\n torch.autograd.backward(self, gradient, retain_graph, create_graph)\n File\n\"/idiap/user/rkarimi/libs/anaconda3/envs/transformer/lib/python3.6/site-packages/torch/autograd/__init__.py\",\nline 93, in backward\n allow_unreachable=True) # allow_unreachable flag\nRuntimeError: merge_sort: failed to synchronize: device-side assert\ntriggered\nEpoch: 0%|\n\n | 0/3 [00:00<?, ?it/s]\nIteration: 0%|\n\nBest\nRabeeh\n\n\nOn Tue, Jan 7, 2020 at 4:19 PM Lysandre Debut <[email protected]>\nwrote:\n\n> Hi, thank you for raising this issue. Could you please let me know if\n> 27c1b65\n> <https://github.com/huggingface/transformers/commit/27c1b656cca75efa0cc414d3bf4e6aacf24829de>\n> fixed this issue by trying the updated script?\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/2380?email_source=notifications&email_token=ABP4ZCFDVP5F63P244QV3EDQ4SMPHA5CNFSM4KB3TOB2YY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOEIJGNHA#issuecomment-571631260>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ABP4ZCGGGP5IA3UW4OEN6DLQ4SMPHANCNFSM4KB3TOBQ>\n> .\n>\n",
"@rabeehk, concerning your first issue:\r\n\r\n> block_size value is by default = -1, which creates the following error, can be solved by setting the default value to 512\r\n\r\n[the very first usage of `args.block_size`](https://github.com/huggingface/transformers/blob/master/examples/run_lm_finetuning.py#L639-L642) is to check if it is a negative value (e.g. -1) and to put it to the maximum model length. Is this not working in your case?\r\n\r\n> The issue will resolve by setting smaller block_size <= 510, it would be very nice to document this in the codes that one needs to set the block_size <= 510 as a temporary solution. thanks\r\n\r\nThis should be solved by the previously mentioned lines as well.\r\n\r\n> In mask_tokens function, the following lines needs to be set to -1 not -100 which is the ignore_index used in the \"BertForMaskedLM\" model:\r\nlabels[~masked_indices] = -100 => -1\r\n\r\nThis is not the case anymore, as you can see in the [`BertForMaskedLM` source code](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L1001). The examples are maintained to work with the current `master` branch, and not a specific release. If you want to run scripts with a specific version, you can get them from a specific version tag on GitHub, e.g. [version 2.3.0](https://github.com/huggingface/transformers/tree/v2.3.0).\r\n\r\nPlease let me know if you can see why the block size doesn't seem to be set to the maximum value, I'll fix it if it is an issue with the script. Thank you @rabeehk!",
"@rabeehk Hi ! May I ask you that you fixed the problem \"\"ValueError: num_samples should be a positive integeral value, but got num_samples=0\" because you set the \"global_step = 0\" ? like this:\r\n\r\n`try:\r\n # set global_step to gobal_step of last saved checkpoint from model path\r\n\r\n checkpoint_suffix = args.model_name_or_path.split(\"-\")[-1].split(\"/\")[0]\r\n\r\n global_step = int(checkpoint_suffix)\r\n\r\n epochs_trained = global_step // (len(train_dataloader) // args.gradient_accumulation_steps)\r\n\r\n steps_trained_in_current_epoch = global_step % (len(train_dataloader) // args.gradient_accumulation_steps)`\r\n \r\n Should I change the \"global_step = int(checkpoint_suffix)\" to \"global_step = 0\" ? thanks !",
"Hi\nNo. You need to set block-size to a positive number try with 510 maybe.\nBest\nRabeeh\n\nOn Thu, Jan 9, 2020, 12:14 PM JiangYanting <[email protected]> wrote:\n\n> @rabeehk <https://github.com/rabeehk> Hi ! May I ask you that you fixed\n> the problem \"\"ValueError: num_samples should be a positive integeral value,\n> but got num_samples=0\" because you set the \"global_step = 0\" ? like this:\n>\n> try: # set global_step to gobal_step of last saved checkpoint from model\n> path checkpoint_suffix =\n> args.model_name_or_path.split(\"-\")[-1].split(\"/\")[0] global_step =\n> int(checkpoint_suffix) epochs_trained = global_step //\n> (len(train_dataloader) // args.gradient_accumulation_steps)\n> steps_trained_in_current_epoch = global_step % (len(train_dataloader) //\n> args.gradient_accumulation_steps)\n>\n> Should I change the \"global_step = int(checkpoint_suffix)\" to \"global_step\n> = 0\" ? thanks !\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/2380?email_source=notifications&email_token=ABP4ZCBCKG7SAYK4YPHVPFTQ44BJJA5CNFSM4KB3TOB2YY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOEIP6HDQ#issuecomment-572515214>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ABP4ZCB3F7ZALBP6RG4HI63Q44BJJANCNFSM4KB3TOBQ>\n> .\n>\n",
"Changing from 512 to 510 worked for me. I would think that we should be able to use 512, the max size for Bert input? Or there something I'm overlooking? ",
"Hi, I just encountered the same error finetuning a custom gpt-2 model with `run_language_modeling.py` on Colab.\r\n```\r\nTraceback (most recent call last):\r\n File \"run_language_modeling.py\", line 799, in <module>\r\n main()\r\n File \"run_language_modeling.py\", line 749, in main\r\n global_step, tr_loss = train(args, train_dataset, model, tokenizer)\r\n File \"run_language_modeling.py\", line 245, in train\r\n train_sampler = RandomSampler(train_dataset) if args.local_rank == -1 else DistributedSampler(train_dataset)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/utils/data/sampler.py\", line 94, in __init__\r\n \"value, but got num_samples={}\".format(self.num_samples))\r\nValueError: num_samples should be a positive integer value, but got num_samples=0\r\n```\r\nI solved by specifying the `--block_size`, as @rabeehk said. \r\nMight be worth mentioning that in [your docs](https://huggingface.co/transformers/examples.html#gpt-2-gpt-and-causal-language-modeling), or have a default setup that works out of the box ? I also had to dig into the code to find the `--should_continue` and `--overwrite_output_dir` flags to continue training, is there a page where that is discussed by any chance? \r\n\r\nAs an aside, I can't seem to find a flag to print the loss during training? I see there is a log/save step every 500 iterations, but it doesn't give any of these stats. Is there something super obvious I am missing?\r\n\r\nThanks in any case!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,577 | 1,589 | 1,589 | NONE | null | Hi
I am using run_lm_finetuning.py and encountered the following issues:
- The block_size value defaults to -1, which causes the following error; it can be solved by setting the default value to 512:
```
File "run_lm_finetuning.py", line 712, in <module>
main()
File "run_lm_finetuning.py", line 662, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer)
File "run_lm_finetuning.py", line 198, in train
train_sampler = RandomSampler(train_dataset) if args.local_rank == -1 else DistributedSampler(train_dataset)
File "/idiap/user/rkarimi/libs/anaconda3/envs/transformer36/lib/python3.6/site-packages/torch/utils/data/sampler.py", line 64, in __init__
"value, but got num_samples={}".format(self.num_samples))
ValueError: num_samples should be a positive integeral value, but got num_samples=0
```
- global_step = int(args.model_name_or_path.split("-")[-1].split("/")[0]) can crash: if, for example, args.model_name_or_path is "gpt2", the expression calls int() on a non-numeric string, which raises a ValueError. Maybe default it to 0? (A possible guard is sketched at the end of this issue.)
- When running the script for the BERT model I also got the following error (I am using PyTorch 1.2):
```
(transformer) rkarimi@italix17:/idiap/user/rkarimi/dev/lm_heads$ python run_lm_finetuning.py --output_dir=/idiap/temp/rkarimi/lm_heads/distilbert --model_type=distilbert --model_name_or_path=/idiap/temp/rkarimi/pretrained_transformers/bert_distil --do_train --train_data_file=/idiap/temp/rkarimi/resources/wikitext-2-raw/wiki.train.raw --do_eval --eval_data_file=/idiap/temp/rkarimi/resources/wikitext-2-raw/wiki.test.raw --mlm --block_size=511
To use data.metrics please install scikit-learn. See https://scikit-learn.org/stable/index.html
01/02/2020 16:53:27 - WARNING - __main__ - Process rank: -1, device: cpu, n_gpu: 0, distributed training: False, 16-bits training: False
01/02/2020 16:53:27 - INFO - transformers.configuration_utils - loading configuration file /idiap/temp/rkarimi/pretrained_transformers/bert_distil/config.json
01/02/2020 16:53:27 - INFO - transformers.configuration_utils - Model config {
"activation": "gelu",
"attention_dropout": 0.1,
"dim": 768,
"dropout": 0.1,
"finetuning_task": null,
"hidden_dim": 3072,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"initializer_range": 0.02,
"is_decoder": false,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"max_position_embeddings": 512,
"n_heads": 12,
"n_layers": 6,
"num_labels": 2,
"output_attentions": false,
"output_hidden_states": false,
"output_past": true,
"pruned_heads": {},
"qa_dropout": 0.1,
"seq_classif_dropout": 0.2,
"sinusoidal_pos_embds": false,
"tie_weights_": true,
"torchscript": false,
"use_bfloat16": false,
"vocab_size": 30522
}
01/02/2020 16:53:27 - INFO - transformers.tokenization_utils - Model name '/idiap/temp/rkarimi/pretrained_transformers/bert_distil' not found in model shortcut name list (distilbert-base-uncased, distilbert-base-uncased-distilled-squad, distilbert-base-german-cased, distilbert-base-multilingual-cased). Assuming '/idiap/temp/rkarimi/pretrained_transformers/bert_distil' is a path or url to a directory containing tokenizer files.
01/02/2020 16:53:27 - INFO - transformers.tokenization_utils - Didn't find file /idiap/temp/rkarimi/pretrained_transformers/bert_distil/added_tokens.json. We won't load it.
01/02/2020 16:53:27 - INFO - transformers.tokenization_utils - Didn't find file /idiap/temp/rkarimi/pretrained_transformers/bert_distil/special_tokens_map.json. We won't load it.
01/02/2020 16:53:27 - INFO - transformers.tokenization_utils - Didn't find file /idiap/temp/rkarimi/pretrained_transformers/bert_distil/tokenizer_config.json. We won't load it.
01/02/2020 16:53:27 - INFO - transformers.tokenization_utils - loading file /idiap/temp/rkarimi/pretrained_transformers/bert_distil/vocab.txt
01/02/2020 16:53:27 - INFO - transformers.tokenization_utils - loading file None
01/02/2020 16:53:27 - INFO - transformers.tokenization_utils - loading file None
01/02/2020 16:53:27 - INFO - transformers.tokenization_utils - loading file None
01/02/2020 16:53:27 - INFO - transformers.modeling_utils - loading weights file /idiap/temp/rkarimi/pretrained_transformers/bert_distil/pytorch_model.bin
01/02/2020 16:53:28 - INFO - __main__ - Training/evaluation parameters Namespace(adam_epsilon=1e-08, block_size=511, cache_dir='', config_name='', device=device(type='cpu'), do_eval=True, do_lower_case=False, do_train=True, eval_all_checkpoints=False, eval_data_file='/idiap/temp/rkarimi/resources/wikitext-2-raw/wiki.test.raw', evaluate_during_training=False, fp16=False, fp16_opt_level='O1', gradient_accumulation_steps=1, learning_rate=5e-05, local_rank=-1, logging_steps=50, max_grad_norm=1.0, max_steps=-1, mlm=True, mlm_probability=0.15, model_name_or_path='/idiap/temp/rkarimi/pretrained_transformers/bert_distil', model_type='distilbert', n_gpu=0, no_cuda=False, num_train_epochs=1.0, output_dir='/idiap/temp/rkarimi/lm_heads/distilbert', overwrite_cache=False, overwrite_output_dir=False, per_gpu_eval_batch_size=4, per_gpu_train_batch_size=4, save_steps=50, save_total_limit=None, seed=42, server_ip='', server_port='', tokenizer_name='', train_data_file='/idiap/temp/rkarimi/resources/wikitext-2-raw/wiki.train.raw', warmup_steps=0, weight_decay=0.0)
01/02/2020 16:53:28 - INFO - __main__ - Creating features from dataset file at /idiap/temp/rkarimi/resources/wikitext-2-raw
01/02/2020 16:53:53 - INFO - __main__ - Saving features into cached file /idiap/temp/rkarimi/pretrained_transformers/bert_distil_cached_lm_511_wiki.train.raw
01/02/2020 16:53:53 - INFO - __main__ - ***** Running training *****
01/02/2020 16:53:53 - INFO - __main__ - Num examples = 4303
01/02/2020 16:53:53 - INFO - __main__ - Num Epochs = 1
01/02/2020 16:53:53 - INFO - __main__ - Instantaneous batch size per GPU = 4
01/02/2020 16:53:53 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 4
01/02/2020 16:53:53 - INFO - __main__ - Gradient Accumulation steps = 1
01/02/2020 16:53:53 - INFO - __main__ - Total optimization steps = 1076
01/02/2020 16:53:53 - INFO - __main__ - Continuing training from checkpoint, will skip to saved global_step
01/02/2020 16:53:53 - INFO - __main__ - Continuing training from epoch 0
01/02/2020 16:53:53 - INFO - __main__ - Continuing training from global step 0
01/02/2020 16:53:53 - INFO - __main__ - Will skip the first 0 steps in the first epoch
Epoch: 0%| | 0/1 [00:00<?, ?it/sTraceback (most recent call last): | 0/1076 [00:00<?, ?it/s]
File "run_lm_finetuning.py", line 738, in <module>
main()
File "run_lm_finetuning.py", line 688, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer)
File "run_lm_finetuning.py", line 325, in train
outputs = model(inputs, masked_lm_labels=labels) if args.mlm else model(inputs, labels=labels)
File "/idiap/user/rkarimi/libs/anaconda3/envs/transformer/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "/idiap/user/rkarimi/libs/anaconda3/envs/transformer/lib/python3.6/site-packages/transformers/modeling_distilbert.py", line 540, in forward
inputs_embeds=inputs_embeds)
File "/idiap/user/rkarimi/libs/anaconda3/envs/transformer/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "/idiap/user/rkarimi/libs/anaconda3/envs/transformer/lib/python3.6/site-packages/transformers/modeling_distilbert.py", line 477, in forward
inputs_embeds = self.embeddings(input_ids) # (bs, seq_length, dim)
File "/idiap/user/rkarimi/libs/anaconda3/envs/transformer/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "/idiap/user/rkarimi/libs/anaconda3/envs/transformer/lib/python3.6/site-packages/transformers/modeling_distilbert.py", line 96, in forward
position_embeddings = self.position_embeddings(position_ids) # (bs, max_seq_length, dim)
File "/idiap/user/rkarimi/libs/anaconda3/envs/transformer/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "/idiap/user/rkarimi/libs/anaconda3/envs/transformer/lib/python3.6/site-packages/torch/nn/modules/sparse.py", line 114, in forward
self.norm_type, self.scale_grad_by_freq, self.sparse)
File "/idiap/user/rkarimi/libs/anaconda3/envs/transformer/lib/python3.6/site-packages/torch/nn/functional.py", line 1467, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: index out of range: Tried to access index 512 out of table with 511 rows. at /opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/TH/generic/THTensorEvenMoreMath.cpp:237
```
The issue is resolved by setting a smaller block_size <= 510; it would be very helpful to document in the code that one needs to set block_size <= 510 as a temporary workaround. Thanks.
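For reference, the workaround amounts to an invocation like the following (a sketch only; the paths are placeholders, the relevant part is passing --block_size with a value of at most 510, a flag that appears in the training args logged above):
```
python run_lm_finetuning.py \
    --model_type=distilbert \
    --model_name_or_path=/path/to/bert_distil \
    --do_train --mlm \
    --train_data_file=$TRAIN_FILE \
    --output_dir=$OUTPUT_DIR \
    --block_size=510
```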
- In the mask_tokens function, the following line needs to use -1 rather than -100, since -1 is the ignore_index used by the "BertForMaskedLM" model:
labels[~masked_indices] = -100 => -1
Thanks.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2380/reactions",
"total_count": 7,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2380/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2379 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2379/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2379/comments | https://api.github.com/repos/huggingface/transformers/issues/2379/events | https://github.com/huggingface/transformers/issues/2379 | 544,338,566 | MDU6SXNzdWU1NDQzMzg1NjY= | 2,379 | finetune transformer | {
"login": "jackfeinmann5",
"id": 59409879,
"node_id": "MDQ6VXNlcjU5NDA5ODc5",
"avatar_url": "https://avatars.githubusercontent.com/u/59409879?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jackfeinmann5",
"html_url": "https://github.com/jackfeinmann5",
"followers_url": "https://api.github.com/users/jackfeinmann5/followers",
"following_url": "https://api.github.com/users/jackfeinmann5/following{/other_user}",
"gists_url": "https://api.github.com/users/jackfeinmann5/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jackfeinmann5/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jackfeinmann5/subscriptions",
"organizations_url": "https://api.github.com/users/jackfeinmann5/orgs",
"repos_url": "https://api.github.com/users/jackfeinmann5/repos",
"events_url": "https://api.github.com/users/jackfeinmann5/events{/privacy}",
"received_events_url": "https://api.github.com/users/jackfeinmann5/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,577 | 1,583 | 1,583 | NONE | null | Hi
I would greatly appreciate guidance on how to pretrain a transformer model like BERT, that is, pretraining from scratch rather than fine-tuning. Is there any code in your repo that does this? Thanks a lot for your help. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2379/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2379/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2378 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2378/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2378/comments | https://api.github.com/repos/huggingface/transformers/issues/2378/events | https://github.com/huggingface/transformers/pull/2378 | 544,239,450 | MDExOlB1bGxSZXF1ZXN0MzU4Mzk4MDI3 | 2,378 | added pad_to_max_length option to batch_encode_plus | {
"login": "ameasure",
"id": 571959,
"node_id": "MDQ6VXNlcjU3MTk1OQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/571959?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ameasure",
"html_url": "https://github.com/ameasure",
"followers_url": "https://api.github.com/users/ameasure/followers",
"following_url": "https://api.github.com/users/ameasure/following{/other_user}",
"gists_url": "https://api.github.com/users/ameasure/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ameasure/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ameasure/subscriptions",
"organizations_url": "https://api.github.com/users/ameasure/orgs",
"repos_url": "https://api.github.com/users/ameasure/repos",
"events_url": "https://api.github.com/users/ameasure/events{/privacy}",
"received_events_url": "https://api.github.com/users/ameasure/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Thanks @ameasure, do you think you could run the quality tool as defined in the contributing guidelines for that test `check_code_quality` to pass?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,577 | 1,584 | 1,584 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2378/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2378/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2378",
"html_url": "https://github.com/huggingface/transformers/pull/2378",
"diff_url": "https://github.com/huggingface/transformers/pull/2378.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2378.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/2377 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2377/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2377/comments | https://api.github.com/repos/huggingface/transformers/issues/2377/events | https://github.com/huggingface/transformers/pull/2377 | 544,177,875 | MDExOlB1bGxSZXF1ZXN0MzU4MzQ4NDI1 | 2,377 | Text generation on GPU: Moved the encoded_prompt to correct device | {
"login": "alberduris",
"id": 7073086,
"node_id": "MDQ6VXNlcjcwNzMwODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/7073086?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alberduris",
"html_url": "https://github.com/alberduris",
"followers_url": "https://api.github.com/users/alberduris/followers",
"following_url": "https://api.github.com/users/alberduris/following{/other_user}",
"gists_url": "https://api.github.com/users/alberduris/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alberduris/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alberduris/subscriptions",
"organizations_url": "https://api.github.com/users/alberduris/orgs",
"repos_url": "https://api.github.com/users/alberduris/repos",
"events_url": "https://api.github.com/users/alberduris/events{/privacy}",
"received_events_url": "https://api.github.com/users/alberduris/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thank you @alberduris !"
] | 1,577 | 1,578 | 1,578 | CONTRIBUTOR | null | Moved the `encoded_prompt` to the correct device to solve the problem when using GPU.
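A minimal sketch of the kind of change this describes (the variable and argument names are illustrative, taken from the run_generation.py context, not the exact diff):
```python
# Encode the prompt, then move it to the same device as the model before generation;
# leaving it on the CPU is what caused the mismatch on GPU runs.
encoded_prompt = tokenizer.encode(prompt_text, add_special_tokens=False, return_tensors="pt")
encoded_prompt = encoded_prompt.to(args.device)
output_sequences = model.generate(input_ids=encoded_prompt, max_length=50)
```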
Solves the problem mentioned in #227 #1414 #2360 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2377/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2377/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2377",
"html_url": "https://github.com/huggingface/transformers/pull/2377",
"diff_url": "https://github.com/huggingface/transformers/pull/2377.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2377.patch",
"merged_at": 1578319873000
} |
https://api.github.com/repos/huggingface/transformers/issues/2376 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2376/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2376/comments | https://api.github.com/repos/huggingface/transformers/issues/2376/events | https://github.com/huggingface/transformers/issues/2376 | 544,177,102 | MDU6SXNzdWU1NDQxNzcxMDI= | 2,376 | Classification of sentence pair with two different languages | {
"login": "chiragsanghvi10",
"id": 45583446,
"node_id": "MDQ6VXNlcjQ1NTgzNDQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/45583446?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chiragsanghvi10",
"html_url": "https://github.com/chiragsanghvi10",
"followers_url": "https://api.github.com/users/chiragsanghvi10/followers",
"following_url": "https://api.github.com/users/chiragsanghvi10/following{/other_user}",
"gists_url": "https://api.github.com/users/chiragsanghvi10/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chiragsanghvi10/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chiragsanghvi10/subscriptions",
"organizations_url": "https://api.github.com/users/chiragsanghvi10/orgs",
"repos_url": "https://api.github.com/users/chiragsanghvi10/repos",
"events_url": "https://api.github.com/users/chiragsanghvi10/events{/privacy}",
"received_events_url": "https://api.github.com/users/chiragsanghvi10/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,577 | 1,583 | 1,583 | NONE | null | I have been working on multi-lingual sentence similarity (English-Hindi)
### for example:
> Sentence 1 (English)
> Sentence 2 (Translation in Hindi of Sentence 1)
> Sentence Similarity Score.
Any idea on how I can train for sentence similarity using `xlm-mlm-xnli15-1024`?
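In case it helps frame the question, here is a minimal sketch of one possible setup (an assumption on my part, not an official recipe): feed the sentence pair through `xlm-mlm-xnli15-1024` with a single-output head, which the library treats as a regression head, and fit it to the similarity scores.
```python
import torch
from transformers import XLMTokenizer, XLMForSequenceClassification

tokenizer = XLMTokenizer.from_pretrained("xlm-mlm-xnli15-1024")
# num_labels=1 gives a single regression output (MSE loss when float labels are passed)
model = XLMForSequenceClassification.from_pretrained("xlm-mlm-xnli15-1024", num_labels=1)

# Encode an English/Hindi pair as a single sequence pair
inputs = tokenizer.encode_plus("How are you?", "आप कैसे हैं?", return_tensors="pt")
labels = torch.tensor([[0.95]])  # gold similarity score for this pair

loss, prediction = model(**inputs, labels=labels)[:2]
loss.backward()  # then step an optimizer over many such pairs
```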
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2376/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2376/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2375 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2375/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2375/comments | https://api.github.com/repos/huggingface/transformers/issues/2375/events | https://github.com/huggingface/transformers/issues/2375 | 544,120,731 | MDU6SXNzdWU1NDQxMjA3MzE= | 2,375 | Is the position of the scheduler.step() correct? | {
"login": "AdityaSoni19031997",
"id": 22738086,
"node_id": "MDQ6VXNlcjIyNzM4MDg2",
"avatar_url": "https://avatars.githubusercontent.com/u/22738086?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AdityaSoni19031997",
"html_url": "https://github.com/AdityaSoni19031997",
"followers_url": "https://api.github.com/users/AdityaSoni19031997/followers",
"following_url": "https://api.github.com/users/AdityaSoni19031997/following{/other_user}",
"gists_url": "https://api.github.com/users/AdityaSoni19031997/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AdityaSoni19031997/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AdityaSoni19031997/subscriptions",
"organizations_url": "https://api.github.com/users/AdityaSoni19031997/orgs",
"repos_url": "https://api.github.com/users/AdityaSoni19031997/repos",
"events_url": "https://api.github.com/users/AdityaSoni19031997/events{/privacy}",
"received_events_url": "https://api.github.com/users/AdityaSoni19031997/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Why this issue were left without an answer? I don't know what is the answer, but it seems that this scheduler should be executed at batch-level since there are other examples using batch-level instead of epoch-level: https://github.com/huggingface/transformers/blob/8e8384663d716d4b5a4f510070ff954fc0ba4a52/examples/research_projects/bert-loses-patience/run_glue_with_pabee.py.\r\n\r\nAnyway, LambdaLR is a generic LR scheduler which PyTorch provides in order to implement other LR schedulers. It is true that PyTorch docs talks about epoch-level, but there are other LR schedulers, like [CyclicLR scheduler](https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.CyclicLR.html), which explicitely indicates that has to be executed at batch-level. Since LambdaLR is a generic LR scheduler, this scheduler will need to be executed at batch-level or epoch-level depending on the specifid LR scheduler implemented. For the linear LR scheduler of the issue, I guess that the correct is to be executed at batch-level, or even could be adapted to epoch level if you want, but taking a look at the scripts of the repository, they use batch-level."
] | 1,577 | 1,658 | 1,577 | CONTRIBUTOR | null | ## ❓ Questions & Help
In the [lm_finetuning_file](https://github.com/huggingface/transformers/blob/master/examples/run_lm_finetuning.py#L285-L319), the scheduler used is "get_linear_schedule_with_warmup" which in turn uses the underlying "LambdaLR" [ref](https://github.com/huggingface/transformers/blob/594ca6deadb6bb79451c3093641e3c9e5dcfa446/src/transformers/optimization.py#L47);
On a careful study of the file, the "scheduler" is called for every batch, but that does not match the official PyTorch [docs](https://pytorch.org/docs/stable/optim.html#torch.optim.lr_scheduler.LambdaLR) for changing the LR, since those call it at the epoch level:
```
>>> lambda1 = lambda epoch: epoch // 30
>>> lambda2 = lambda epoch: 0.95 ** epoch
>>> scheduler = LambdaLR(optimizer, lr_lambda=[lambda1, lambda2])
>>> for epoch in range(100):
>>> train(...)
>>> validate(...)
>>> scheduler.step()
```
I mean, it makes sense to me to change the LR at the batch level, but I am not sure why the PyTorch docs show it differently?
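For reference, a minimal sketch of the per-batch pattern the example scripts follow (`model`, `train_dataloader`, `t_total` and `num_epochs` are assumed to exist; this is not the exact code from run_lm_finetuning.py):
```python
from transformers import AdamW, get_linear_schedule_with_warmup

optimizer = AdamW(model.parameters(), lr=5e-5)
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=0, num_training_steps=t_total)

for _ in range(num_epochs):
    for batch in train_dataloader:
        loss = model(**batch)[0]
        loss.backward()
        optimizer.step()
        scheduler.step()  # stepped once per optimizer update, i.e. per batch
        optimizer.zero_grad()
```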
Thanks. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2375/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2375/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2374 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2374/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2374/comments | https://api.github.com/repos/huggingface/transformers/issues/2374/events | https://github.com/huggingface/transformers/issues/2374 | 544,074,027 | MDU6SXNzdWU1NDQwNzQwMjc= | 2,374 | Fine-tuning BertAbs on new dataset? | {
"login": "ehsan-soe",
"id": 12740904,
"node_id": "MDQ6VXNlcjEyNzQwOTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/12740904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ehsan-soe",
"html_url": "https://github.com/ehsan-soe",
"followers_url": "https://api.github.com/users/ehsan-soe/followers",
"following_url": "https://api.github.com/users/ehsan-soe/following{/other_user}",
"gists_url": "https://api.github.com/users/ehsan-soe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ehsan-soe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ehsan-soe/subscriptions",
"organizations_url": "https://api.github.com/users/ehsan-soe/orgs",
"repos_url": "https://api.github.com/users/ehsan-soe/repos",
"events_url": "https://api.github.com/users/ehsan-soe/events{/privacy}",
"received_events_url": "https://api.github.com/users/ehsan-soe/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"duplicate of #2597, no update yet sadly."
] | 1,577 | 1,583 | 1,583 | NONE | null | ## 🚀 Feature
Hi,
I wonder if there could be a script for fine-tuning BertAbs on a new dataset?
Or, if you have any hints to share about this task, that would help too. I am not sure how to access the loss from ```modeling_bertabs.py```.
Thanks
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2374/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2374/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2373 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2373/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2373/comments | https://api.github.com/repos/huggingface/transformers/issues/2373/events | https://github.com/huggingface/transformers/issues/2373 | 544,060,016 | MDU6SXNzdWU1NDQwNjAwMTY= | 2,373 | RuntimeError: The size of tensor a (30524) must match the size of tensor b (30522) at non-singleton dimension 2 --- run_lm_finetuning.py | {
"login": "ehsan-soe",
"id": 12740904,
"node_id": "MDQ6VXNlcjEyNzQwOTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/12740904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ehsan-soe",
"html_url": "https://github.com/ehsan-soe",
"followers_url": "https://api.github.com/users/ehsan-soe/followers",
"following_url": "https://api.github.com/users/ehsan-soe/following{/other_user}",
"gists_url": "https://api.github.com/users/ehsan-soe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ehsan-soe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ehsan-soe/subscriptions",
"organizations_url": "https://api.github.com/users/ehsan-soe/orgs",
"repos_url": "https://api.github.com/users/ehsan-soe/repos",
"events_url": "https://api.github.com/users/ehsan-soe/events{/privacy}",
"received_events_url": "https://api.github.com/users/ehsan-soe/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Can you share the full command you are using to run the script?",
"> Can you share the full command you are using to run the script?\r\n\r\n> Can you share the full command you are using to run the script?\r\n\r\nHi sure, this is the command (basically the same as the document):\r\n```\r\npython run_lm_finetuning.py --output_dir=output --model_type=bert --model_name_or_path=bert-base-uncased --do_train --train_data_file=$TRAIN_FILE --do_eval --eval_data_file=$T\r\nEST_FILE --mlm --output_dir=$OUTPUT_DIR/bert-fine --num_train_epochs 3\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,577 | 1,584 | 1,584 | NONE | null | ## 🐛 Bug
I am using ```run_lm_finetuning.py``` to fine-tune bert-base-uncased on my dataset and I am getting the following error:
I also truncated my dataset so that the number of examples is divisible by the batch size.
Note that fine-tuning gpt2 on the same dataset works fine.
```
12/30/2019 17:23:28 - INFO - __main__ - ***** Running training *****
12/30/2019 17:23:28 - INFO - __main__ - Num examples = 4048
12/30/2019 17:23:28 - INFO - __main__ - Num Epochs = 3
12/30/2019 17:23:28 - INFO - __main__ - Instantaneous batch size per GPU = 4
12/30/2019 17:23:28 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 4
12/30/2019 17:23:28 - INFO - __main__ - Gradient Accumulation steps = 1
12/30/2019 17:23:28 - INFO - __main__ - Total optimization steps = 3036
Epoch: 0%| | 0/3 [00:00<?, ?it/sTraceback (most recent call last): | 0/1012 [00:00<?, ?it/s]
File "run_lm_finetuning.py", line 722, in <module>
main()
File "run_lm_finetuning.py", line 672, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer)
File "run_lm_finetuning.py", line 306, in train
outputs = model(inputs, masked_lm_labels=labels) if args.mlm else model(inputs, labels=labels)
File "/home/anaconda3/envs/torch03/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/anaconda3/envs/torch03/lib/python3.6/site-packages/transformers/modeling_bert.py", line 990, in forward
prediction_scores = self.cls(sequence_output)
File "/home/anaconda3/envs/torch03/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/anaconda3/envs/torch03/lib/python3.6/site-packages/transformers/modeling_bert.py", line 496, in forward
prediction_scores = self.predictions(sequence_output)
File "/home/anaconda3/envs/torch03/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/anaconda3/envs/torch03/lib/python3.6/site-packages/transformers/modeling_bert.py", line 486, in forward
hidden_states = self.decoder(hidden_states) + self.bias
RuntimeError: The size of tensor a (30524) must match the size of tensor b (30522) at non-singleton dimension 2
```
<!-- If you have a code sample, error messages, stack traces, please provide it here as well. -->
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
## Environment
* OS: Ubuntu
* Python version: python3.6
* PyTorch version: 1.3.0
* PyTorch Transformers version (or branch):
* Using GPU ? Yes
* Distributed of parallel setup ? No
* Any other relevant information:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2373/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2373/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2372 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2372/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2372/comments | https://api.github.com/repos/huggingface/transformers/issues/2372/events | https://github.com/huggingface/transformers/issues/2372 | 544,030,819 | MDU6SXNzdWU1NDQwMzA4MTk= | 2,372 | What is the "could not find answer" warning in squad.py | {
"login": "cppntn",
"id": 26765504,
"node_id": "MDQ6VXNlcjI2NzY1NTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26765504?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cppntn",
"html_url": "https://github.com/cppntn",
"followers_url": "https://api.github.com/users/cppntn/followers",
"following_url": "https://api.github.com/users/cppntn/following{/other_user}",
"gists_url": "https://api.github.com/users/cppntn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cppntn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cppntn/subscriptions",
"organizations_url": "https://api.github.com/users/cppntn/orgs",
"repos_url": "https://api.github.com/users/cppntn/repos",
"events_url": "https://api.github.com/users/cppntn/events{/privacy}",
"received_events_url": "https://api.github.com/users/cppntn/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This means that the script that converts the examples to features can't find the answers it should be finding. Rather than trying to predict those, it ignores them.\r\n\r\nThis means that these examples won't be used for training, reducing the total number of examples that will be used. If it is a small portion of the total number of examples, it shouldn't impact the resulting accuracy much. If it is a significant portion of the examples then it would be a good idea to look into it to see if there's a quick fix.",
"Hi @LysandreJik, thanks for the clarification. I noticed that for some of my data it happens that the the \"text\" field in \"answers\" field may differ from the one present in the \"context\" just because of some upper/lower letters mismatch. Do you think this could be avoided by using an uncased model?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@antocapp I had your same logs with a cased model. Now I'm using an uncased model, putting the flag `--do_lower_case` in [run_squad.py](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_squad.py) I expected not to have those warnings, instead they appeared anyway. \r\n\r\nI took a look in the doc and I saw that in [run_squad.py](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_squad.py), examples are passed to the function `squad_convert_examples_to_features` [in this line](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_squad.py#L448).\r\n\r\n```python\r\nfeatures, dataset = squad_convert_examples_to_features(\r\n examples=examples,\r\n tokenizer=tokenizer,\r\n max_seq_length=args.max_seq_length,\r\n doc_stride=args.doc_stride,\r\n max_query_length=args.max_query_length,\r\n is_training=not evaluate,\r\n return_dataset=\"pt\",\r\n threads=args.threads,\r\n )\r\n```\r\n\r\nThe tokenizer is created passing the argument `--do_lower_case` so it should tokenize putting the lower case to every token. Anyway the warning you see comes within [squad_convert_example_to_features](https://github.com/huggingface/transformers/blob/930153e7d2d658267b7630a047a4bfc85b86042d/src/transformers/data/processors/squad.py#L91) declaration.\r\n\r\n```python\r\ndef squad_convert_example_to_features(\r\n example, max_seq_length, doc_stride, max_query_length, padding_strategy, is_training\r\n):\r\n features = []\r\n if is_training and not example.is_impossible:\r\n # Get start and end position\r\n start_position = example.start_position\r\n end_position = example.end_position\r\n\r\n # If the answer cannot be found in the text, then skip this example.\r\n actual_text = \" \".join(example.doc_tokens[start_position : (end_position + 1)])\r\n cleaned_answer_text = \" \".join(whitespace_tokenize(example.answer_text))\r\n if actual_text.find(cleaned_answer_text) == -1:\r\n logger.warning(\"Could not find answer: '%s' vs. '%s'\", actual_text, cleaned_answer_text)\r\n return []\r\n\r\n tok_to_orig_index = []\r\n orig_to_tok_index = []\r\n all_doc_tokens = []\r\n for (i, token) in enumerate(example.doc_tokens):\r\n orig_to_tok_index.append(len(all_doc_tokens))\r\n sub_tokens = tokenizer.tokenize(token)\r\n for sub_token in sub_tokens:\r\n tok_to_orig_index.append(i)\r\n all_doc_tokens.append(sub_token)\r\n# code continues...\r\n```\r\n\r\nAs you can see `actual_text` and `cleaned_answer_text` use `example.doc_tokens` and `example.answer_text` which already contain upper_case! `cleaned_answer_text` is searched within `actual_text` considering upper-case letters different from lower_case letters, so an example like _'Mantenere i precetti' vs 'mantenere i precetti'_ (like you told in the issue) would be discarded. 
Indeed the `tokenizer` hasn't tokenized yet in those lines so, even if the features could be created with lower_case, that check makes that example to be discardes, even if it could be considered!\r\n\r\nSo what I made, is putting a `lower()` on every field of example before passing it to that function, changin [run_squad.py](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_squad.py) in this way:\r\n```python\r\n# other code...\r\nelse:\r\n processor = SquadV2Processor() if args.version_2_with_negative else SquadV1Processor()\r\n if evaluate:\r\n examples = processor.get_dev_examples(args.data_dir, filename=args.predict_file)\r\n else:\r\n examples = processor.get_train_examples(args.data_dir, filename=args.train_file)\r\n \r\n if args.do_lower_case:\r\n logger.info(\"Putting lower case to examples...\")\r\n for example in examples:\r\n example.doc_tokens = [token.lower() for token in example.doc_tokens]\r\n example.question_text = example.question_text.lower()\r\n example.context_text = example.context_text.lower()\r\n if example.answer_text is not None: # for dev set\r\n example.answer_text = example.answer_text.lower()\r\n \r\n features, dataset = squad_convert_examples_to_features(\r\n examples=examples,\r\n tokenizer=tokenizer,\r\n max_seq_length=args.max_seq_length,\r\n doc_stride=args.doc_stride,\r\n max_query_length=args.max_query_length,\r\n is_training=not evaluate,\r\n return_dataset=\"pt\",\r\n threads=args.threads,\r\n )\r\n```\r\n\r\nI don't know if this can improve the results, but it avoids some discarded examples for sure 😊\r\n@LysandreJik, is this a bug 🐛 ? Or maybe was there another trivial method to fix this?",
"Hi @paulthemagno, lowering every example was the same thing I did to solve the warning, although I lowered the dataset instead of editing the run_squad.py; but it is indeed the same thing.\r\n\r\nI uploaded just toady a model on the HF model hub (https://huggingface.co/antoniocappiello/bert-base-italian-uncased-squad-it).\r\n\r\nThis was trained based on `dbmdz/bert-base-italian-uncased`; I tried also a training with Musixmatch Umberto but the F1 and EM were slightly lower (like 1 point percentage lower). \r\n\r\nBut maybe running several experiments with different hyperparameters could lead to better results. "
] | 1,577 | 1,598 | 1,584 | NONE | null | Hello,
I am trying to run run_squad.py for BERT (italian-cased) with an italian version of squad.
During the creation of features from dataset, I got some answer skipped like in the following:
<img width="478" alt="Screenshot 2019-12-30 at 23 30 19" src="https://user-images.githubusercontent.com/26765504/71603304-81081e80-2b5c-11ea-8333-73608e3141a7.png">
Can you tell why is this happening and if this influences the overall accuracy of the training?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2372/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/2372/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2371 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2371/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2371/comments | https://api.github.com/repos/huggingface/transformers/issues/2371/events | https://github.com/huggingface/transformers/issues/2371 | 543,864,034 | MDU6SXNzdWU1NDM4NjQwMzQ= | 2,371 | Encounter an "index out of range problem" | {
"login": "Yuejiang-li",
"id": 30067525,
"node_id": "MDQ6VXNlcjMwMDY3NTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/30067525?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Yuejiang-li",
"html_url": "https://github.com/Yuejiang-li",
"followers_url": "https://api.github.com/users/Yuejiang-li/followers",
"following_url": "https://api.github.com/users/Yuejiang-li/following{/other_user}",
"gists_url": "https://api.github.com/users/Yuejiang-li/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Yuejiang-li/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Yuejiang-li/subscriptions",
"organizations_url": "https://api.github.com/users/Yuejiang-li/orgs",
"repos_url": "https://api.github.com/users/Yuejiang-li/repos",
"events_url": "https://api.github.com/users/Yuejiang-li/events{/privacy}",
"received_events_url": "https://api.github.com/users/Yuejiang-li/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"BTW, I manually downloaded the pretrained model, and save it in dir \"./bert-base-chinese\"",
"@Yuejiang-li \r\nTo me, the error msg indicates that the seq generated after tokenization of \"weibo_content\" has more than 512 tokens. The 512 is the max num of tokens in one seq allowed for the BERT model (embedding layer input size). You have to separate the \"weibo_content\" into at least two shorter sentences and feed them separately.\r\n\r\nYou can print \"tokenized_weibo_content\" to confirm.",
"@FacingBugs \r\nThat's true. I forgot that point... Thank you so much!",
"> That's true. I forgot that point... Thank you so much!\r\n\r\nHi Yuejiang, Can you please share the way you have separated the content? I'm facing the same problem.\r\n\r\nKind regards\r\n\r\n"
] | 1,577 | 1,601 | 1,577 | NONE | null | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
Hi!
I'm currently using BERT to obtain sentence embeddings for Chinese text inputs.
Things are fine for most cases. However when I am dealing with this text:
```python
weibo_content = "貌似还是没有完全修复。 http://As.international.anger.grows.over.reports.of.mass.carnage.at.the.hands.of.the.Syrian.regime.a.U.N.Security.Council.draft.resolution.condemning.Syria.failed.to.be.adopted.Saturday.after.vetowielding.members.Russia.and.China.voted.against.it.Ambassadors.from.the.other.permanent.members.of.the.council..the.United.States.France.and.the.United.Kingdom..said.they.were.furious.at.Russia.and.China.for.failing.to.halt.the.worsening.bloody.violence.that.has.consumed.the.Middle.Eastern.nation.Thirteen.Security.Council.members.voted.in.favor.of.the.resolution.The.vote.was.a.major.diplomatic.setback.for.countries.hoping.to.send.a.unified.message.to.embattled.Syrian.President.Bashar.alAssad.and.also.for.opposition.groups.that.look.toward.the.United.Nations.for.support.Those.that.have.blocked.potentially.the.last.effort.to.resolve.this.peacefully.will.have.any.future.blood.spill.on.their.hands.U.S.Ambassador.Susan.Rice.told.CNN.The.people.of.Syria.have.yet.again.been.abandoned.by.this.Council.and.by.the.international.community.Some.Syrians.have.cried.out.for.international.action.to.stop.attacks.on.civilians.more.so.after.opposition.groups.said.at.least.321.civilians.were.killed.and.hundreds.wounded.in.the.city.of.Homs.in.the.past.two.days.The.opposition.Syrian.National.Council.blamed.government.forces.for.the.attack.in.Homs.calling.it.one.of.the.most.horrific.massacres.since.the.start.of.the.Syrian.uprising.Residential.buildings.and.homes.were.randomly.and.heavily.bombed.the.group.said.The.Local.Coordination.Committees.LCC.a.Syrian.opposition.group.said.90.people.had.been.killed.in.Syria.on.Saturday.including.61.in.Homs.10.in.Idlib.and.19.in.a.Damascus.suburb.In.a.bid.to.pressure.the.government.the.group.called.for.a.twoday.civil.strike.to.start.on.Sunday.Another.opposition.group.the.Syrian.Observatory.for.Human.Rights.reported.that.48.people.were.killed.across.Syria.on.Saturday.including.six.army.defectors.and.18.members.of.the.Syrian.security.forces"
```
something goes wrong...
Okay, this text piece contains a URL with many English words in it, but I think "bert-base-chinese" should still be able to handle this situation.
So the following code goes like:
```python
import torch
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-chinese')
model = BertModel.from_pretrained('./bert-base-chinese')
model.eval()
weibo_content = "[cls]" + weibo_content + "[sep]" # weibo_content is the target text to be extracted embeddings from
tokenized_weibo_content = tokenizer.tokenize(weibo_content)
indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_weibo_content)
segments_ids = [1] * len(tokenized_weibo_content)
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])
with torch.no_grad():
encoded_layers, _ = model(tokens_tensor, segments_tensors)
```
In the last step of the above code, I meet with the following problem:
```python
Traceback (most recent call last):
File "C:\Users\xzzz\Anaconda3\envs\cfdstorch\lib\site-packages\IPython\core\interactiveshell.py", line 3326, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-12-c7fe4edd73d7>", line 7, in <module>
encoded_layers, _ = model(tokens_tensor, segments_tensors)
File "C:\Users\xzzz\Anaconda3\envs\cfdstorch\lib\site-packages\torch\nn\modules\module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\xzzz\Anaconda3\envs\cfdstorch\lib\site-packages\transformers\modeling_bert.py", line 735, in forward
embedding_output = self.embeddings(input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds)
File "C:\Users\xzzz\Anaconda3\envs\cfdstorch\lib\site-packages\torch\nn\modules\module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\xzzz\Anaconda3\envs\cfdstorch\lib\site-packages\transformers\modeling_bert.py", line 187, in forward
position_embeddings = self.position_embeddings(position_ids)
File "C:\Users\xzzz\Anaconda3\envs\cfdstorch\lib\site-packages\torch\nn\modules\module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\xzzz\Anaconda3\envs\cfdstorch\lib\site-packages\torch\nn\modules\sparse.py", line 114, in forward
self.norm_type, self.scale_grad_by_freq, self.sparse)
File "C:\Users\xzzz\Anaconda3\envs\cfdstorch\lib\site-packages\torch\nn\functional.py", line 1484, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: index out of range: Tried to access index 512 out of table with 511 rows. at C:\w\1\s\tmp_conda_3.6_111945\conda\conda-bld\pytorch_1572952852006\work\aten\src\TH/generic/THTensorEvenMoreMath.cpp:418
```
Does anyone know why this happens? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2371/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2371/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2370 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2370/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2370/comments | https://api.github.com/repos/huggingface/transformers/issues/2370/events | https://github.com/huggingface/transformers/issues/2370 | 543,841,417 | MDU6SXNzdWU1NDM4NDE0MTc= | 2,370 | Pipelines: add PoS support | {
"login": "arnaudmiribel",
"id": 7164864,
"node_id": "MDQ6VXNlcjcxNjQ4NjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/7164864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arnaudmiribel",
"html_url": "https://github.com/arnaudmiribel",
"followers_url": "https://api.github.com/users/arnaudmiribel/followers",
"following_url": "https://api.github.com/users/arnaudmiribel/following{/other_user}",
"gists_url": "https://api.github.com/users/arnaudmiribel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arnaudmiribel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arnaudmiribel/subscriptions",
"organizations_url": "https://api.github.com/users/arnaudmiribel/orgs",
"repos_url": "https://api.github.com/users/arnaudmiribel/repos",
"events_url": "https://api.github.com/users/arnaudmiribel/events{/privacy}",
"received_events_url": "https://api.github.com/users/arnaudmiribel/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1771187924,
"node_id": "MDU6TGFiZWwxNzcxMTg3OTI0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Pipeline",
"name": "Core: Pipeline",
"color": "FF7066",
"default": false,
"description": "Internals of the library; Pipeline."
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"We now have a more general `TokenClassificationPipeline`, @arnaudmiribel (this is just an alias to the `NerPipeline`)"
] | 1,577 | 1,583 | 1,583 | CONTRIBUTOR | null | ## 🚀 Feature
As `Pipelines` were recently added for many tasks including NER, Sentiment Analysis, it'd be great to also enable Part-of-Speech tagging.
## Motivation
PoS tagging is a very useful task, and often used as an evaluating downstream task for new models.
## Additional context
Current available tasks for `Pipelines` are described [here](https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines.py#L831).
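For reference, a minimal sketch of how a pipeline is invoked today, using the existing "ner" task as the nearest analogue to token-level PoS tagging (a "pos" task is exactly what is being requested here; it does not exist yet):
```python
from transformers import pipeline

# Token-level tagging as it exists today, via the NER pipeline
nlp = pipeline("ner")
print(nlp("Hugging Face is based in New York City."))

# The feature request: an analogous part-of-speech pipeline, e.g. pipeline("pos")  (hypothetical)
```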
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2370/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2370/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2369 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2369/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2369/comments | https://api.github.com/repos/huggingface/transformers/issues/2369/events | https://github.com/huggingface/transformers/pull/2369 | 543,832,095 | MDExOlB1bGxSZXF1ZXN0MzU4MDcyNDk3 | 2,369 | few changes due to the torch version inconsistency in summarization example | {
"login": "junxu-ai",
"id": 11970592,
"node_id": "MDQ6VXNlcjExOTcwNTky",
"avatar_url": "https://avatars.githubusercontent.com/u/11970592?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/junxu-ai",
"html_url": "https://github.com/junxu-ai",
"followers_url": "https://api.github.com/users/junxu-ai/followers",
"following_url": "https://api.github.com/users/junxu-ai/following{/other_user}",
"gists_url": "https://api.github.com/users/junxu-ai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/junxu-ai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/junxu-ai/subscriptions",
"organizations_url": "https://api.github.com/users/junxu-ai/orgs",
"repos_url": "https://api.github.com/users/junxu-ai/repos",
"events_url": "https://api.github.com/users/junxu-ai/events{/privacy}",
"received_events_url": "https://api.github.com/users/junxu-ai/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,577 | 1,583 | 1,583 | NONE | null | This small change intends to fix the issue #2297.
It is essentially a version inconsistency issue.
In torch 1.1.0, `torch.gt(torch.tensor([[1, 2], [3, 4]]), torch.tensor([[1, 1], [4, 4]]))` outputs
`tensor([[0, 1], [0, 0]], dtype=torch.uint8)`,
while in torch 1.2.0 the same `torch.gt` call outputs a boolean tensor,
`tensor([[False, True], [False, False]])`.
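A sketch of the sort of guard this implies (illustrative only; the helper name, the `scores`/`threshold` placeholders, and where the conversion is applied are assumptions, not the actual content of this PR):
```python
import torch

# Comparison ops return torch.uint8 before torch 1.2.0 and torch.bool from 1.2.0 onwards;
# normalizing the mask dtype keeps the downstream masking code working on both versions.
_TORCH_MAJOR, _TORCH_MINOR = (int(x) for x in torch.__version__.split(".")[:2])
RETURNS_BOOL = (_TORCH_MAJOR, _TORCH_MINOR) >= (1, 2)

mask = torch.gt(scores, threshold)   # scores/threshold are placeholder names
if RETURNS_BOOL:
    mask = mask.byte()               # keep the older uint8 behaviour expected elsewhere
```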
Thus, I added a version-checking function and revised the tensor type used with tensor.gt(). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2369/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2369/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2369",
"html_url": "https://github.com/huggingface/transformers/pull/2369",
"diff_url": "https://github.com/huggingface/transformers/pull/2369.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2369.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/2368 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2368/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2368/comments | https://api.github.com/repos/huggingface/transformers/issues/2368/events | https://github.com/huggingface/transformers/issues/2368 | 543,778,396 | MDU6SXNzdWU1NDM3NzgzOTY= | 2,368 | Clarification regarding past/layer_past in GPT-2 | {
"login": "zphang",
"id": 1668462,
"node_id": "MDQ6VXNlcjE2Njg0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1668462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zphang",
"html_url": "https://github.com/zphang",
"followers_url": "https://api.github.com/users/zphang/followers",
"following_url": "https://api.github.com/users/zphang/following{/other_user}",
"gists_url": "https://api.github.com/users/zphang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zphang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zphang/subscriptions",
"organizations_url": "https://api.github.com/users/zphang/orgs",
"repos_url": "https://api.github.com/users/zphang/repos",
"events_url": "https://api.github.com/users/zphang/events{/privacy}",
"received_events_url": "https://api.github.com/users/zphang/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"the `past` variable in GTP-2 stores all previously computed key and value vectors. Because GPT-2 uses masked self-attention only the query vectors of previous tokens are updated at every step, but not the key and value vectors. Therefore the `past` variable can be used to speed up decoding. \r\n\r\nTo better understand how GPT-2 works, I highly recommend reading [the Illustrated GPT-2](http://jalammar.github.io/illustrated-gpt2) especially the part: \r\n\r\n**The Illustrated Masked Self-Attention**",
"Thanks for the response! \r\n\r\nI'm still not sure this is intuitive for me. The linked article mentions: \"Now in the next iteration, when the model processes the word robot, it does not need to generate query, key, and value queries for the a token. It just reuses the ones it saved from the first iteration:\", which seems to imply Q, K and V are reused, whereas it seems we're only (optionally) reusing K and V. \r\n\r\nOn the other hand, I'm not seeing where in the code that masked self-attention only affected the query vectors. It seems to be that attention masking is applied to the scoring vectors at each layer, and that should affect the generated Q/K/V for all downstream layers.\r\n\r\nIt feels like there may be some key part of the intuition I'm missing, so thanks for the help.",
"I read the article @patrickvonplaten pointed. I am still very confused about the usage of attention when there are \"past\" vectors. The dimension just doesn't match.\r\n\r\n19 is the batch size here; I am using gpt-2 small. 40 is the encoding seq len; 23 is the decoding seq len.\r\nInput document past[0] size: torch.Size([2, 19, 12, 40, 64])\r\nto-decode-sequence input embedding: torch.Size([19, 23, 768])\r\nmask of the input document: torch.Size([19, 40])\r\nmask of the to-decode-sequence: torch.Size([19, 23])\r\nconcat of two masks: torch.Size([19, 63]) // 63=23+40\r\n\r\nconcat doesn't work // _decoder_output = self.transformer(past=presents,attention_mask=concat_attention_mask, inputs_embeds=gt_input_embedding) fails\r\n\r\nto-decode-mask doesn't work // _decoder_output = self.transformer(past=presents,attention_mask=partial_oracle_trajectory_mask,inputs_embeds=gt_input_embedding ) fails\r\n\r\n\r\nError message for concat:\r\n` attention_mask = attention_mask.view(-1, input_shape[-1])\r\nRuntimeError: shape '[-1, 23]' is invalid for input of size 1197`\r\n\r\nError message for to-decode-mask only\r\n`RuntimeError: The size of tensor a (63) must match the size of tensor b (23) at non-singleton dimension 3\r\n`\r\n\r\n@zphang any idea? not sure if this correlates with what you said tho. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hi @jiacheng-xu - sorry for answering that late. I think the problem with the attention mask was recently fixed in PR #3033. The issue was also mentioned in #3031 I think. Let me know if you still have problems when using the current version of master."
] | 1,577 | 1,584 | 1,584 | CONTRIBUTOR | null | ## ❓ Questions & Help
I'm hoping to get some clarification regarding how past/layer_past are meant to work in GPT-2. My prior impression was that the query/key/value at every layer (other than the first) should be influenced by all tokens the model is able to see. As such, it shouldn't make much sense to use pre-computed key/value activations from prior steps. Could you clarify this issue?
(Hope my question makes sense.) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2368/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2368/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2367 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2367/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2367/comments | https://api.github.com/repos/huggingface/transformers/issues/2367/events | https://github.com/huggingface/transformers/issues/2367 | 543,759,668 | MDU6SXNzdWU1NDM3NTk2Njg= | 2,367 | Load the google bert model(ckpt) from TFBertForPreTraining error | {
"login": "zwqjoy",
"id": 12653212,
"node_id": "MDQ6VXNlcjEyNjUzMjEy",
"avatar_url": "https://avatars.githubusercontent.com/u/12653212?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zwqjoy",
"html_url": "https://github.com/zwqjoy",
"followers_url": "https://api.github.com/users/zwqjoy/followers",
"following_url": "https://api.github.com/users/zwqjoy/following{/other_user}",
"gists_url": "https://api.github.com/users/zwqjoy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zwqjoy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zwqjoy/subscriptions",
"organizations_url": "https://api.github.com/users/zwqjoy/orgs",
"repos_url": "https://api.github.com/users/zwqjoy/repos",
"events_url": "https://api.github.com/users/zwqjoy/events{/privacy}",
"received_events_url": "https://api.github.com/users/zwqjoy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,577 | 1,583 | 1,583 | NONE | null | I want to use the Google Chinese BERT ckpt model in transformers, and my environment uses TF2.
Transformers can load the ckpt into a PyTorch model,
but I want to load the ckpt into a tf.keras model. How can I do this?

model = BertForPreTraining.from_pretrained(checkpoint_path, from_tf=True, config=config)
Success.
But:

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2367/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2367/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2366 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2366/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2366/comments | https://api.github.com/repos/huggingface/transformers/issues/2366/events | https://github.com/huggingface/transformers/issues/2366 | 543,727,652 | MDU6SXNzdWU1NDM3Mjc2NTI= | 2,366 | How Can I load the google bert model(ckpt)? | {
"login": "zwqjoy",
"id": 12653212,
"node_id": "MDQ6VXNlcjEyNjUzMjEy",
"avatar_url": "https://avatars.githubusercontent.com/u/12653212?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zwqjoy",
"html_url": "https://github.com/zwqjoy",
"followers_url": "https://api.github.com/users/zwqjoy/followers",
"following_url": "https://api.github.com/users/zwqjoy/following{/other_user}",
"gists_url": "https://api.github.com/users/zwqjoy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zwqjoy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zwqjoy/subscriptions",
"organizations_url": "https://api.github.com/users/zwqjoy/orgs",
"repos_url": "https://api.github.com/users/zwqjoy/repos",
"events_url": "https://api.github.com/users/zwqjoy/events{/privacy}",
"received_events_url": "https://api.github.com/users/zwqjoy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,577 | 1,583 | 1,583 | NONE | null | TF2.0
## ❓ Questions & Help
import os
from transformers import BertConfig, TFBertForPreTraining
pretrained_path = 'chinese_L-12_H-768_A-12'
config_path = os.path.join(pretrained_path, 'bert_config.json')
checkpoint_path = os.path.join(pretrained_path, 'bert_model.ckpt')
vocab_path = os.path.join(pretrained_path, 'vocab.txt')
config = BertConfig.from_json_file(config_path)
model = TFBertForPreTraining.from_pretrained(checkpoint_path, config=config)

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2366/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2366/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2365 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2365/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2365/comments | https://api.github.com/repos/huggingface/transformers/issues/2365/events | https://github.com/huggingface/transformers/issues/2365 | 543,575,714 | MDU6SXNzdWU1NDM1NzU3MTQ= | 2,365 | upgrading new transformer doesn't work | {
"login": "ehsan-soe",
"id": 12740904,
"node_id": "MDQ6VXNlcjEyNzQwOTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/12740904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ehsan-soe",
"html_url": "https://github.com/ehsan-soe",
"followers_url": "https://api.github.com/users/ehsan-soe/followers",
"following_url": "https://api.github.com/users/ehsan-soe/following{/other_user}",
"gists_url": "https://api.github.com/users/ehsan-soe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ehsan-soe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ehsan-soe/subscriptions",
"organizations_url": "https://api.github.com/users/ehsan-soe/orgs",
"repos_url": "https://api.github.com/users/ehsan-soe/repos",
"events_url": "https://api.github.com/users/ehsan-soe/events{/privacy}",
"received_events_url": "https://api.github.com/users/ehsan-soe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Previously you were relying on `transformers` being implicitly added to `PYTHONPATH` when you were working from the source of the repository. This breaks if you move to another directory, like `examples`.\r\n\r\n`pip install .` makes `transformers` available in your virtualenv regardless of where you're working.\r\n\r\nIt's unclear to me why the installation is stuck.\r\n\r\nCould you run the following commands?\r\n\r\n```\r\npip uninstall transformers\r\npip --version\r\npip install --verbose .\r\n```\r\n",
"Thanks for replying @aaugustin .\r\nmy pip version is 19.1.1 \r\nHowever, I ended up cloning the whole repo again and install fresh.\r\nI will close it for now.",
"OK, that was going to be my next suggestion if the situation didn't improve!"
] | 1,577 | 1,577 | 1,577 | NONE | null | ## ❓ Questions & Help
Hi,
I have pulled the repo again since a lot of stuff has changed or been added.
When I try to use the ```pip install --upgrade .``` command, nothing changes and I am stuck at the following step forever:
```
(py36) pytorch-transformers$ pip install --upgrade .
Processing /home/pytorch-transformers
```
Plus, since a lot of folders were renamed and scripts were moved to the ```src``` folder, when I try to do ```from transformers import BertTokenizer, BertModel, BertForMaskedLM```
I get the following error:
```
ImportError Traceback (most recent call last)
<ipython-input-1-34ecfe73cb1a> in <module>
1 import torch
----> 2 from transformers import BertTokenizer, BertModel, BertForMaskedLM, BertConfig, BertForPreTraining, BertConfig
3
4 # OPTIONAL: if you want to have more information on what's happening, activate the logger as follows
5 import logging
ImportError: cannot import name 'BertTokenizer' from 'transformers' (unknown location)
```
Would you please help with this? What is the reason to move to src? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2365/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2365/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2364 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2364/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2364/comments | https://api.github.com/repos/huggingface/transformers/issues/2364/events | https://github.com/huggingface/transformers/issues/2364 | 543,507,442 | MDU6SXNzdWU1NDM1MDc0NDI= | 2,364 | How to fine-tune PreTrainedEncoderDecoder on new dataset? | {
"login": "fabrahman",
"id": 22799593,
"node_id": "MDQ6VXNlcjIyNzk5NTkz",
"avatar_url": "https://avatars.githubusercontent.com/u/22799593?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fabrahman",
"html_url": "https://github.com/fabrahman",
"followers_url": "https://api.github.com/users/fabrahman/followers",
"following_url": "https://api.github.com/users/fabrahman/following{/other_user}",
"gists_url": "https://api.github.com/users/fabrahman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fabrahman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fabrahman/subscriptions",
"organizations_url": "https://api.github.com/users/fabrahman/orgs",
"repos_url": "https://api.github.com/users/fabrahman/repos",
"events_url": "https://api.github.com/users/fabrahman/events{/privacy}",
"received_events_url": "https://api.github.com/users/fabrahman/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Seconded. Is there a way to fine-tune any seq2seq model in huggingface?",
"@Josh-Payne You can have a look at https://github.com/huggingface/transformers/blob/9df74b8bc42eedc496f7148b9370728054ca3b6a/src/transformers/modeling_encoder_decoder.py",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,577 | 1,590 | 1,590 | NONE | null | ## ❓ Questions & Help
Hi,
Many thanks for your recent work implementing [this paper](https://arxiv.org/pdf/1907.12461.pdf).
I wonder if you have, or could provide, a script and documentation for fine-tuning PreTrainedEncoderDecoder on a new dataset?
Many thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2364/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2364/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2363 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2363/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2363/comments | https://api.github.com/repos/huggingface/transformers/issues/2363/events | https://github.com/huggingface/transformers/issues/2363 | 543,473,827 | MDU6SXNzdWU1NDM0NzM4Mjc= | 2,363 | Finetuning on several tasks | {
"login": "paul-you",
"id": 23263212,
"node_id": "MDQ6VXNlcjIzMjYzMjEy",
"avatar_url": "https://avatars.githubusercontent.com/u/23263212?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/paul-you",
"html_url": "https://github.com/paul-you",
"followers_url": "https://api.github.com/users/paul-you/followers",
"following_url": "https://api.github.com/users/paul-you/following{/other_user}",
"gists_url": "https://api.github.com/users/paul-you/gists{/gist_id}",
"starred_url": "https://api.github.com/users/paul-you/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/paul-you/subscriptions",
"organizations_url": "https://api.github.com/users/paul-you/orgs",
"repos_url": "https://api.github.com/users/paul-you/repos",
"events_url": "https://api.github.com/users/paul-you/events{/privacy}",
"received_events_url": "https://api.github.com/users/paul-you/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"The `BertForSequenceClassification` model has a classifier head transforming Bert output into `num_labels` output. So if you change the classification output, it can't be loaded as you could see.\r\nThe only hack you could do is to load the fine-tuned model with previous `num_labels`, then remove classifier head and replace it with a new classifier with your new `num_labels` (re-initialized) and fine-tune again.\r\nYet, your Bert model was fine-tuned on a first classification task that is not the same as the second classification task. So if both tasks are too different on too different datasets, it's not sure it will learn anything... or not sure your previous head will give anything decent after that...\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,577 | 1,583 | 1,583 | NONE | null | ## ❓ Questions & Help
Hello,
Is it possible to fine-tune a Transformer on some dataset, and then fine-tune the model again on another dataset with a different number of output labels?
I tried this and got the following error:
```
RuntimeError: Error(s) in loading state_dict for BertForSequenceClassification:
size mismatch for classifier.weight: copying a param with shape torch.Size([2, 768]) from checkpoint, the shape in current model is torch.Size([3, 768]).
size mismatch for classifier.bias: copying a param with shape torch.Size([2]) from checkpoint, the shape in current model is torch.Size([3]).
```
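A rough sketch of the head-swap hack suggested in the replies (the checkpoint directory and the label counts below are hypothetical, and the new head is randomly initialised, so it needs to be fine-tuned again):

```python
import torch.nn as nn
from transformers import BertForSequenceClassification

# Load with the *old* number of labels so the saved state dict matches
model = BertForSequenceClassification.from_pretrained("my-2-label-checkpoint")

# Replace the classification head with a fresh one for the new task
new_num_labels = 3
model.classifier = nn.Linear(model.config.hidden_size, new_num_labels)
model.num_labels = new_num_labels
model.config.num_labels = new_num_labels
# ...then fine-tune on the second dataset as usual.
```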
Thanks in advance
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2363/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2363/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2362 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2362/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2362/comments | https://api.github.com/repos/huggingface/transformers/issues/2362/events | https://github.com/huggingface/transformers/issues/2362 | 543,436,736 | MDU6SXNzdWU1NDM0MzY3MzY= | 2,362 | Why albert has a print statement during forward? | {
"login": "ChristofHenkel",
"id": 24292431,
"node_id": "MDQ6VXNlcjI0MjkyNDMx",
"avatar_url": "https://avatars.githubusercontent.com/u/24292431?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ChristofHenkel",
"html_url": "https://github.com/ChristofHenkel",
"followers_url": "https://api.github.com/users/ChristofHenkel/followers",
"following_url": "https://api.github.com/users/ChristofHenkel/following{/other_user}",
"gists_url": "https://api.github.com/users/ChristofHenkel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ChristofHenkel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ChristofHenkel/subscriptions",
"organizations_url": "https://api.github.com/users/ChristofHenkel/orgs",
"repos_url": "https://api.github.com/users/ChristofHenkel/repos",
"events_url": "https://api.github.com/users/ChristofHenkel/events{/privacy}",
"received_events_url": "https://api.github.com/users/ChristofHenkel/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This was an issue with a previous version of transformers. Please upgrade it to a more recent version for the warning to go away. Thank you.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,577 | 1,584 | 1,584 | NONE | null | ## ❓ Questions & Help
Both AlbertTransformer and AlbertLayerGroup have print statements in their forward methods, which mess up logging/printing during training.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2362/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2362/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2361 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2361/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2361/comments | https://api.github.com/repos/huggingface/transformers/issues/2361/events | https://github.com/huggingface/transformers/pull/2361 | 543,378,999 | MDExOlB1bGxSZXF1ZXN0MzU3NjYwNTEw | 2,361 | Improve logging message in feature conversion functions | {
"login": "simonepri",
"id": 3505087,
"node_id": "MDQ6VXNlcjM1MDUwODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3505087?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/simonepri",
"html_url": "https://github.com/simonepri",
"followers_url": "https://api.github.com/users/simonepri/followers",
"following_url": "https://api.github.com/users/simonepri/following{/other_user}",
"gists_url": "https://api.github.com/users/simonepri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/simonepri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/simonepri/subscriptions",
"organizations_url": "https://api.github.com/users/simonepri/orgs",
"repos_url": "https://api.github.com/users/simonepri/repos",
"events_url": "https://api.github.com/users/simonepri/events{/privacy}",
"received_events_url": "https://api.github.com/users/simonepri/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2361?src=pr&el=h1) Report\n> Merging [#2361](https://codecov.io/gh/huggingface/transformers/pull/2361?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f75bf05ce6a05ef316363de129c29f2e00cacd7b?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `0%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2361?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2361 +/- ##\n=======================================\n Coverage 73.23% 73.23% \n=======================================\n Files 87 87 \n Lines 14985 14985 \n=======================================\n Hits 10975 10975 \n Misses 4010 4010\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2361?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/processors/utils.py](https://codecov.io/gh/huggingface/transformers/pull/2361/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvdXRpbHMucHk=) | `19.6% <0%> (ø)` | :arrow_up: |\n| [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/2361/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `27.86% <0%> (ø)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2361?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2361?src=pr&el=footer). Last update [f75bf05...dc69c5c](https://codecov.io/gh/huggingface/transformers/pull/2361?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Great, thanks @simonepri "
] | 1,577 | 1,578 | 1,578 | NONE | null | This PR adds the total number of examples to process to the log message produced during the feature conversion step. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2361/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2361/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2361",
"html_url": "https://github.com/huggingface/transformers/pull/2361",
"diff_url": "https://github.com/huggingface/transformers/pull/2361.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2361.patch",
"merged_at": 1578318877000
} |
https://api.github.com/repos/huggingface/transformers/issues/2360 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2360/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2360/comments | https://api.github.com/repos/huggingface/transformers/issues/2360/events | https://github.com/huggingface/transformers/issues/2360 | 543,340,652 | MDU6SXNzdWU1NDMzNDA2NTI= | 2,360 | CTRL - RuntimeError: Expected object of backend CUDA but got backend CPU for argument #3 'index' | {
"login": "GuyTevet",
"id": 24757373,
"node_id": "MDQ6VXNlcjI0NzU3Mzcz",
"avatar_url": "https://avatars.githubusercontent.com/u/24757373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GuyTevet",
"html_url": "https://github.com/GuyTevet",
"followers_url": "https://api.github.com/users/GuyTevet/followers",
"following_url": "https://api.github.com/users/GuyTevet/following{/other_user}",
"gists_url": "https://api.github.com/users/GuyTevet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GuyTevet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GuyTevet/subscriptions",
"organizations_url": "https://api.github.com/users/GuyTevet/orgs",
"repos_url": "https://api.github.com/users/GuyTevet/repos",
"events_url": "https://api.github.com/users/GuyTevet/events{/privacy}",
"received_events_url": "https://api.github.com/users/GuyTevet/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Upgrading torch to 1.3.1 solves the issue",
"I have the same problem even with torch==1.3.1\r\n\r\nI think this should be re-opened",
"I have the same issue for generating with gpt2 \r\n\r\nHere is the error log:\r\n```\r\n File \"run_generation.py\", line 236, in <module>\r\n main()\r\n File \"run_generation.py\", line 222, in main\r\n repetition_penalty=args.repetition_penalty,\r\n File \"/home/anaconda3/envs/torch03/lib/python3.6/site-packages/torch/autograd/grad_mode.py\", line 49, in decorate_no_grad\r\n return func(*args, **kwargs)\r\n File \"/home/anaconda3/envs/torch03/lib/python3.6/site-packages/transformers/modeling_utils.py\", line 744, in generate\r\n effective_batch_size,\r\n File \"/home/anaconda3/envs/torch03/lib/python3.6/site-packages/transformers/modeling_utils.py\", line 775, in _generate_no_beam_search\r\n outputs = self(**model_inputs)\r\n File \"/home/anaconda3/envs/torch03/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 541, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/anaconda3/envs/torch03/lib/python3.6/site-packages/transformers/modeling_gpt2.py\", line 589, in forward\r\n inputs_embeds=inputs_embeds,\r\n File \"/home/anaconda3/envs/torch03/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 541, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/anaconda3/envs/torch03/lib/python3.6/site-packages/transformers/modeling_gpt2.py\", line 456, in forward\r\n inputs_embeds = self.wte(input_ids)\r\n File \"/home/anaconda3/envs/torch03/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 541, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/anaconda3/envs/torch03/lib/python3.6/site-packages/torch/nn/modules/sparse.py\", line 114, in forward\r\n self.norm_type, self.scale_grad_by_freq, self.sparse)\r\n File \"/home/anaconda3/envs/torch03/lib/python3.6/site-packages/torch/nn/functional.py\", line 1484, in embedding\r\n return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)\r\nRuntimeError: Expected object of device type cuda but got device type cpu for argument #3 'index' in call to _th_index_select\r\n```\r\n\r\nI have torch 1.3.0 installed.",
"@ehsan-soe check my last PR #2377, solves the issue. \r\n\r\n",
"@alberduris Thanks 👍 ",
"This seems to be an issue with transformers 2.3.0, as I was able to run the generation code successfully by checkout tag v2.2.2",
"add device and assign the model to it\r\n```\r\n...\r\ndevice = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\r\nmodel.to(device)\r\n...\r\n```\r\nassign also the tensor to the device\r\n```\r\n...\r\nsentence = 'Today, scientists confirmed the worst possible outcome: the massive asteroid will collide with Earth'\r\ncontext_tokens = tokenizer.encode(sentence, add_special_tokens=False)\r\ncontext = torch.tensor(context_tokens, dtype=torch.long)\r\ncontext = context.to(device)\r\n...\r\n```\r\n"
] | 1,577 | 1,582 | 1,577 | NONE | null | ## 🐛 Bug
Hi,
The error
`RuntimeError: Expected object of backend CUDA but got backend CPU for argument #3 'index'`
arises while running CTRL using examples/run_generation.py.
Model I am using (Bert, XLNet....): **CTRL**
Language I am using the model on (English, Chinese....): **English**
The problem arises when using:
running CTRL using run_generation.py
`python examples/run_generation.py --model_type ctrl --model_name ctrl --temperature 0.2 --repetition 1.2`
Full trace:
```
Traceback (most recent call last):
File "examples/run_generation.py", line 236, in <module>
main()
File "examples/run_generation.py", line 222, in main
repetition_penalty=args.repetition_penalty,
File "/media/disk1/guytevet/venvs/py3/lib/python3.6/site-packages/torch/autograd/grad_mode.py", line 43, in decorate_no_grad
return func(*args, **kwargs)
File "/media/disk1/guytevet/transformers/src/transformers/modeling_utils.py", line 744, in generate
effective_batch_size,
File "/media/disk1/guytevet/transformers/src/transformers/modeling_utils.py", line 775, in _generate_no_beam_search
outputs = self(**model_inputs)
File "/media/disk1/guytevet/venvs/py3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/media/disk1/guytevet/transformers/src/transformers/modeling_ctrl.py", line 520, in forward
inputs_embeds=inputs_embeds,
File "/media/disk1/guytevet/venvs/py3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/media/disk1/guytevet/transformers/src/transformers/modeling_ctrl.py", line 388, in forward
inputs_embeds = self.w(input_ids)
File "/media/disk1/guytevet/venvs/py3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/media/disk1/guytevet/venvs/py3/lib/python3.6/site-packages/torch/nn/modules/sparse.py", line 118, in forward
self.norm_type, self.scale_grad_by_freq, self.sparse)
File "/media/disk1/guytevet/venvs/py3/lib/python3.6/site-packages/torch/nn/functional.py", line 1454, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected object of backend CUDA but got backend CPU for argument #3 'index'
```
## Environment
* OS: Ubuntu 18.04.2
* Python version: 3.6
* PyTorch version: 1.0.1.post2
* PyTorch Transformers version (or branch): master, installed from source
-e git+https://github.com/huggingface/transformers.git@f75bf05ce6a05ef316363de129c29f2e00cacd7b#egg=transformers
* Using GPU ? Yes
* Distributed or parallel setup?
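For reference, the usual fix for this class of error is to put both the model and the input ids on the same device — a sketch is below (model name and prompt are only placeholders; a fix to run_generation.py itself is mentioned in the replies as PR #2377):

```python
import torch
from transformers import CTRLLMHeadModel, CTRLTokenizer

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

tokenizer = CTRLTokenizer.from_pretrained("ctrl")
model = CTRLLMHeadModel.from_pretrained("ctrl").to(device)

# The context tensor must live on the same device as the model
context = torch.tensor([tokenizer.encode("Links Hello world")], dtype=torch.long).to(device)
output = model.generate(
    input_ids=context, max_length=40, temperature=0.2, repetition_penalty=1.2
)
print(tokenizer.decode(output[0].tolist()))
```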
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2360/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2360/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2359 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2359/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2359/comments | https://api.github.com/repos/huggingface/transformers/issues/2359/events | https://github.com/huggingface/transformers/issues/2359 | 543,336,794 | MDU6SXNzdWU1NDMzMzY3OTQ= | 2,359 | Confusion about the target_mapping parameter of the xlnet model | {
"login": "neoql",
"id": 25402103,
"node_id": "MDQ6VXNlcjI1NDAyMTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/25402103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/neoql",
"html_url": "https://github.com/neoql",
"followers_url": "https://api.github.com/users/neoql/followers",
"following_url": "https://api.github.com/users/neoql/following{/other_user}",
"gists_url": "https://api.github.com/users/neoql/gists{/gist_id}",
"starred_url": "https://api.github.com/users/neoql/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neoql/subscriptions",
"organizations_url": "https://api.github.com/users/neoql/orgs",
"repos_url": "https://api.github.com/users/neoql/repos",
"events_url": "https://api.github.com/users/neoql/events{/privacy}",
"received_events_url": "https://api.github.com/users/neoql/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,577 | 1,583 | 1,583 | NONE | null | Why code at https://github.com/huggingface/transformers/blob/f75bf05ce6a05ef316363de129c29f2e00cacd7b/src/transformers/modeling_xlnet.py#L1029 is ` target_mapping[0, 0, -1] = 1.0`, i think it should be ` target_mapping[:, :, -1] = 1.0`
And I'm confused about the `target_mapping` parameter: what is the difference between it and the `perm_mask` parameter?
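For concreteness, here is how I understand the two arguments (shapes as described in the XLNet docstrings; this is only an illustration for a single prediction target):

```python
import torch

batch_size, seq_len, num_predict = 1, 5, 1

# perm_mask has shape (batch_size, seq_len, seq_len).
# perm_mask[b, i, j] = 1.0 means token i may NOT attend to token j.
# Here: no token is allowed to see the last position.
perm_mask = torch.zeros(batch_size, seq_len, seq_len)
perm_mask[:, :, -1] = 1.0

# target_mapping has shape (batch_size, num_predict, seq_len).
# target_mapping[b, k, j] = 1.0 means prediction k corresponds to position j.
# Here: one prediction target, mapped onto the last position.
target_mapping = torch.zeros(batch_size, num_predict, seq_len)
target_mapping[:, :, -1] = 1.0

# With batch_size == num_predict == 1, writing `[0, 0, -1]` or `[:, :, -1]`
# sets the same single entry, so the two spellings are equivalent in that example.
```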
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2359/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2359/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2358 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2358/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2358/comments | https://api.github.com/repos/huggingface/transformers/issues/2358/events | https://github.com/huggingface/transformers/issues/2358 | 543,331,776 | MDU6SXNzdWU1NDMzMzE3NzY= | 2,358 | Quickstart BERT Example: Assertion Error | {
"login": "rayedbw",
"id": 4649183,
"node_id": "MDQ6VXNlcjQ2NDkxODM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4649183?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rayedbw",
"html_url": "https://github.com/rayedbw",
"followers_url": "https://api.github.com/users/rayedbw/followers",
"following_url": "https://api.github.com/users/rayedbw/following{/other_user}",
"gists_url": "https://api.github.com/users/rayedbw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rayedbw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rayedbw/subscriptions",
"organizations_url": "https://api.github.com/users/rayedbw/orgs",
"repos_url": "https://api.github.com/users/rayedbw/repos",
"events_url": "https://api.github.com/users/rayedbw/events{/privacy}",
"received_events_url": "https://api.github.com/users/rayedbw/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Not quite sure what happened. Restarted my kernel after installing packages from ```examples/requirements.txt``` and is fixed. Closing the issue."
] | 1,577 | 1,577 | 1,577 | NONE | null | ## 🐛 Bug
Model I am using (Bert, XLNet....): Bert
Language I am using the model on (English, Chinese....): English
The problem arises when:
* I run the official BERT example in my local JupyterLab environment: I copy-pasted the code and ran it in one cell.
## To Reproduce
Steps to reproduce the behavior:
1. Download and install Transformers from source
2. Start Jupyter lab
3. Copy paste and run the Quickstart BERT Example
## Expected behavior
Expected the assertion to pass.
## Environment
* OS: Oracle Linux Server 7.7
* Python version: 3.7.5
* PyTorch version: 1.3.1
* PyTorch Transformers version (or branch): master
* Using GPU ? No
* Distributed or parallel setup? No
## Additional context

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2358/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2358/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2357 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2357/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2357/comments | https://api.github.com/repos/huggingface/transformers/issues/2357/events | https://github.com/huggingface/transformers/issues/2357 | 543,311,525 | MDU6SXNzdWU1NDMzMTE1MjU= | 2,357 | GLUE benchmark score for XLNet_base_cased? | {
"login": "CapGOGO",
"id": 15892793,
"node_id": "MDQ6VXNlcjE1ODkyNzkz",
"avatar_url": "https://avatars.githubusercontent.com/u/15892793?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CapGOGO",
"html_url": "https://github.com/CapGOGO",
"followers_url": "https://api.github.com/users/CapGOGO/followers",
"following_url": "https://api.github.com/users/CapGOGO/following{/other_user}",
"gists_url": "https://api.github.com/users/CapGOGO/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CapGOGO/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CapGOGO/subscriptions",
"organizations_url": "https://api.github.com/users/CapGOGO/orgs",
"repos_url": "https://api.github.com/users/CapGOGO/repos",
"events_url": "https://api.github.com/users/CapGOGO/events{/privacy}",
"received_events_url": "https://api.github.com/users/CapGOGO/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Have you looked at the [run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/run_glue.py) script ?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,577 | 1,584 | 1,584 | NONE | null | ## ❓ Questions & Help
Can someone provide the GLUE benchmark scores for the different GLUE tasks? Alternatively, a script for performing predictions on the test files would be really helpful.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2357/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2357/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2356 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2356/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2356/comments | https://api.github.com/repos/huggingface/transformers/issues/2356/events | https://github.com/huggingface/transformers/pull/2356 | 543,234,113 | MDExOlB1bGxSZXF1ZXN0MzU3NTQzMzc5 | 2,356 | GPT2 should not store/compute cached activations during finetuning | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Not sure which size of GPT-2 you're testing with, but the 355M version utilizes gradient checkpointing for finetuning in gpt-2-simple, which is not the case with the 124M version w/ Transformers.\r\n\r\nThat might be a useful test case.",
"I just tried this with gpt-2-medium on my poetry dataset and have the same memory error as before. Complete info below:\r\n````\r\npython run_lm_finetuning.py --output_dir=output --model_type=gpt2 --model_name_or_path=gpt2-medium --do_train --train_data_file=all_gen_lines.txt --per_gpu_train_batch_size=1\r\n12/29/2019 17:48:47 - WARNING - __main__ - Process rank: -1, device: cuda, n_gpu: 2, distributed training: False, 16-bits training: False\r\n12/29/2019 17:48:47 - INFO - transformers.configuration_utils - loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-medium-config.json from cache at /home/jupyter/.cache/torch/transformers/98aa65385e18b0efd17acd8bf64dcdf21406bb0c99c801c2d3c9f6bfd1f48f29.5f9150c569dadadaa1e66830d29254aa5cf43f8ccd76dc0c81e0102c67032367\r\n12/29/2019 17:48:47 - INFO - transformers.configuration_utils - Model config {\r\n \"attn_pdrop\": 0.1,\r\n \"embd_pdrop\": 0.1,\r\n \"finetuning_task\": null,\r\n \"initializer_range\": 0.02,\r\n \"is_decoder\": false,\r\n \"layer_norm_epsilon\": 1e-05,\r\n \"n_ctx\": 1024,\r\n \"n_embd\": 1024,\r\n \"n_head\": 16,\r\n \"n_layer\": 24,\r\n \"n_positions\": 1024,\r\n \"n_special\": 0,\r\n \"num_labels\": 1,\r\n \"output_attentions\": false,\r\n \"output_hidden_states\": false,\r\n \"output_past\": true,\r\n \"predict_special_tokens\": true,\r\n \"pruned_heads\": {},\r\n \"resid_pdrop\": 0.1,\r\n \"summary_activation\": null,\r\n \"summary_first_dropout\": 0.1,\r\n \"summary_proj_to_labels\": true,\r\n \"summary_type\": \"cls_index\",\r\n \"summary_use_proj\": true,\r\n \"torchscript\": false,\r\n \"use_bfloat16\": false,\r\n \"vocab_size\": 50257\r\n}\r\n\r\n12/29/2019 17:48:48 - INFO - transformers.tokenization_utils - loading file https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-medium-vocab.json from cache at /home/jupyter/.cache/torch/transformers/f20f05d3ae37c4e3cd56764d48e566ea5adeba153dcee6eb82a18822c9c731ec.1512018be4ba4e8726e41b9145129dc30651ea4fec86aa61f4b9f40bf94eac71\r\n12/29/2019 17:48:48 - INFO - transformers.tokenization_utils - loading file https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-medium-merges.txt from cache at /home/jupyter/.cache/torch/transformers/6d882670c55563617571fe0c97df88626fb5033927b40fc18a8acf98dafd4946.70bec105b4158ed9a1747fea67a43f5dee97855c64d62b6ec3742f4cfdb5feda\r\n12/29/2019 17:48:48 - INFO - transformers.modeling_utils - loading weights file https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-medium-pytorch_model.bin from cache at /home/jupyter/.cache/torch/transformers/4b337a4f3b7d3e1518f799e238af607498c02938a3390152aaec7d4dabca5a02.8769029be4f66a5ae1055eefdd1d11621b901d510654266b8681719fff492d6e\r\n12/29/2019 17:49:02 - INFO - __main__ - Training/evaluation parameters Namespace(adam_epsilon=1e-08, block_size=1024, cache_dir='', config_name='', device=device(type='cuda'), do_eval=False, do_lower_case=False, do_train=True, eval_all_checkpoints=False, eval_data_file=None, evaluate_during_training=False, fp16=False, fp16_opt_level='O1', gradient_accumulation_steps=1, learning_rate=5e-05, local_rank=-1, logging_steps=50, max_grad_norm=1.0, max_steps=-1, mlm=False, mlm_probability=0.15, model_name_or_path='gpt2-medium', model_type='gpt2', n_gpu=2, no_cuda=False, num_train_epochs=1.0, output_dir='output', overwrite_cache=False, overwrite_output_dir=False, per_gpu_eval_batch_size=4, per_gpu_train_batch_size=1, save_steps=50, save_total_limit=None, seed=42, server_ip='', server_port='', tokenizer_name='', train_data_file='all_gen_lines.txt', 
warmup_steps=0, weight_decay=0.0)\r\n12/29/2019 17:49:02 - INFO - __main__ - Loading features from cached file gpt2-medium_cached_lm_1024_all_gen_lines.txt.bin\r\n12/29/2019 17:49:02 - INFO - __main__ - ***** Running training *****\r\n12/29/2019 17:49:02 - INFO - __main__ - Num examples = 2061\r\n12/29/2019 17:49:02 - INFO - __main__ - Num Epochs = 1\r\n12/29/2019 17:49:02 - INFO - __main__ - Instantaneous batch size per GPU = 1\r\n12/29/2019 17:49:02 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 2\r\n12/29/2019 17:49:02 - INFO - __main__ - Gradient Accumulation steps = 1\r\n12/29/2019 17:49:02 - INFO - __main__ - Total optimization steps = 1031\r\nEpoch: 0%| | 0/1 [00:00<?, ?it/s/home/jupyter/miniconda3/lib/python3.7/site-packages/torch/nn/parallel/_functions.py:61: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector.\r\n warnings.warn('Was asked to gather along dimension 0, but all '\r\n Traceback (most recent call last): | 1/1031 [00:05<1:30:32, 5.27s/it]\r\n File \"run_lm_finetuning.py\", line 717, in <module>\r\n main()\r\n File \"run_lm_finetuning.py\", line 667, in main\r\n global_step, tr_loss = train(args, train_dataset, model, tokenizer)\r\n File \"run_lm_finetuning.py\", line 298, in train\r\n outputs = model(inputs, masked_lm_labels=labels) if args.mlm else model(inputs, labels=labels)\r\n File \"/home/jupyter/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 541, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/jupyter/miniconda3/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py\", line 152, in forward\r\n outputs = self.parallel_apply(replicas, inputs, kwargs)\r\n File \"/home/jupyter/miniconda3/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py\", line 162, in parallel_apply\r\n return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])\r\n File \"/home/jupyter/miniconda3/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py\", line 85, in parallel_apply\r\n output.reraise()\r\n File \"/home/jupyter/miniconda3/lib/python3.7/site-packages/torch/_utils.py\", line 385, in reraise\r\n raise self.exc_type(msg)\r\nRuntimeError: Caught RuntimeError in replica 0 on device 0.\r\nOriginal Traceback (most recent call last):\r\n File \"/home/jupyter/miniconda3/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py\", line 60, in _worker\r\n output = module(*input, **kwargs)\r\n File \"/home/jupyter/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 541, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/jupyter/miniconda3/lib/python3.7/site-packages/transformers/modeling_gpt2.py\", line 549, in forward\r\n inputs_embeds=inputs_embeds)\r\n File \"/home/jupyter/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 541, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/jupyter/miniconda3/lib/python3.7/site-packages/transformers/modeling_gpt2.py\", line 460, in forward\r\n head_mask=head_mask[i])\r\n File \"/home/jupyter/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 541, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/jupyter/miniconda3/lib/python3.7/site-packages/transformers/modeling_gpt2.py\", line 236, in forward\r\n m = self.mlp(self.ln_2(x))\r\n File 
\"/home/jupyter/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 541, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/jupyter/miniconda3/lib/python3.7/site-packages/transformers/modeling_gpt2.py\", line 214, in forward\r\n h = self.act(self.c_fc(x))\r\n File \"/home/jupyter/miniconda3/lib/python3.7/site-packages/transformers/modeling_gpt2.py\", line 100, in gelu\r\n return 0.5 * x * (1 + torch.tanh(math.sqrt(2 / math.pi) * (x + 0.044715 * torch.pow(x, 3))))\r\nRuntimeError: CUDA out of memory. Tried to allocate 16.00 MiB (GPU 0; 11.17 GiB total capacity; 10.77 GiB already allocated; 14.06 MiB free; 66.92 MiB cached)\r\n\r\nEpoch: 0%| | 0/1 [00:05<?, ?it/s]\r\nIteration: 0%| | 1/1031 [00:05<1:41:31, 5.91s/it]\r\n(base) jupyter@lynn-ukpavilion:~/code/transformers/examples$ nvidia-smi\r\nSun Dec 29 17:49:34 2019\r\n+-----------------------------------------------------------------------------+\r\n| NVIDIA-SMI 418.67 Driver Version: 418.67 CUDA Version: 10.1 |\r\n|-------------------------------+----------------------+----------------------+\r\n| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\r\n| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\r\n|===============================+======================+======================|\r\n| 0 Tesla K80 Off | 00000000:00:04.0 Off | 0 |\r\n| N/A 49C P0 71W / 149W | 0MiB / 11441MiB | 0% Default |\r\n+-------------------------------+----------------------+----------------------+\r\n| 1 Tesla K80 Off | 00000000:00:05.0 Off | 0 |\r\n| N/A 67C P0 88W / 149W | 0MiB / 11441MiB | 94% Default |\r\n+-------------------------------+----------------------+----------------------+\r\n\r\n+-----------------------------------------------------------------------------+\r\n| Processes: GPU Memory |\r\n| GPU PID Type Process name Usage |\r\n|=============================================================================|\r\n| No running processes found |\r\n+-----------------------------------------------------------------------------+\r\n(base) jupyter@lynn-ukpavilion:~/code/transformers/examples$\r\n(base) jupyter@lynn-ukpavilion:~/code/transformers/examples$ git status\r\nOn branch fix-gpt2-finetuning-memory\r\n````",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@thomwolf @LysandreJik : Curious about the status of this. It seems like the memory issues still exists with \"run_lm_finetuning.py\" and GPT-2. For instance, even a batch size of 1 doesn't help prevent OOM error when fine-tuning GPT-2 large with a sequence length of 1024 (despite using FP-16). Is there anything we could do here (apart from gradient checkpointing) that would make the memory usage lower as Thomas listed in his first comment above? Thanks."
] | 1,577 | 1,651 | 1,583 | MEMBER | null | This PR tries to fix the issue with large memory usage from GPT2 during fine-tuning.
## Quick estimations
@LysandreJik compared memory usage with @minimaxir GPT2-simple (https://github.com/minimaxir/gpt-2-simple):
*Small model*, batch size 4, sequence length 512 (roughly similar):
- us => 9.9GB,
- GPT2-simple => 8.5GB
Increasing to a 1024 length:
- us => 20.4GB...,
- GPT2-simple => still 8.5GB
*Medium model*, batch size 4, sequence length 512
- us => 23.36GB. OOM on a titan with 1024 seq len.
- GPT2-simple throws an error related to layers not contained in the checkpoint
## Possible reason
Investigating our `run_lm_finetuning` script and the GPT2 model showed that we are always computing/storing cached hidden-states (which are normally only useful for decoding).
This PR attempts to fix this most probable source of the large memory usage.
It also cleans up the GPT2 codebase a little bit at the same time.
I haven't tried it on a large-scale test yet.
cc @LysandreJik @arnicas | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2356/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2356/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2356",
"html_url": "https://github.com/huggingface/transformers/pull/2356",
"diff_url": "https://github.com/huggingface/transformers/pull/2356.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2356.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/2355 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2355/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2355/comments | https://api.github.com/repos/huggingface/transformers/issues/2355/events | https://github.com/huggingface/transformers/issues/2355 | 543,087,068 | MDU6SXNzdWU1NDMwODcwNjg= | 2,355 | transformers command not found after installing transformers using pip | {
"login": "ManasRMohanty",
"id": 42920503,
"node_id": "MDQ6VXNlcjQyOTIwNTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/42920503?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ManasRMohanty",
"html_url": "https://github.com/ManasRMohanty",
"followers_url": "https://api.github.com/users/ManasRMohanty/followers",
"following_url": "https://api.github.com/users/ManasRMohanty/following{/other_user}",
"gists_url": "https://api.github.com/users/ManasRMohanty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ManasRMohanty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ManasRMohanty/subscriptions",
"organizations_url": "https://api.github.com/users/ManasRMohanty/orgs",
"repos_url": "https://api.github.com/users/ManasRMohanty/repos",
"events_url": "https://api.github.com/users/ManasRMohanty/events{/privacy}",
"received_events_url": "https://api.github.com/users/ManasRMohanty/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I have this problem also. \r\nInstalled with pip3 (maybe this is the necessary information)",
"Hi @ManasRMohanty, @DaniilRoman,\r\n\r\nIn 2.3.0 we introduced some new commands from the cli, which are now provided through **transformers-cli**.\r\n\r\nCan you please try the following: \r\n\r\n```bash\r\ntransformers-cli convert --model_type <model_type> --tf_checkpoint /path/to/tf_model.ckpt --config /path/to/model.json --pytorch_dump_output /path/to/pytorch_model.bin\r\n```\r\n\r\nLet us know :) ",
"@mfuntowicz Do you want to update the doc? (i can do it too if needed)",
"@mfuntowicz @julien-c \r\nYes, the above worked for me in linux. Thank you.\r\n\r\nAlso, I checked in https://huggingface.co/transformers/converting_tensorflow_models.html and I can see that the document is updated, but the convert parameter is missing there, so please update that.",
"@ManasRMohanty I've updated the documentation with the missing keyword. Thanks for reporting 👍 "
] | 1,577 | 1,578 | 1,577 | NONE | null | I wanted to convert TF checkpoints to pytorch saved files and thus I followed instructions as mentioned in the link https://huggingface.co/transformers/converting_tensorflow_models.html
I installed PyTorch, TensorFlow and then transformers. But after doing so, when I ran the command, my system reported `transformers: command not found`.
What could be the issue here? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2355/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2355/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2354 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2354/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2354/comments | https://api.github.com/repos/huggingface/transformers/issues/2354/events | https://github.com/huggingface/transformers/pull/2354 | 543,066,120 | MDExOlB1bGxSZXF1ZXN0MzU3MzkxODk0 | 2,354 | [debug] Debug Heisenbug, the old school way. | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2354?src=pr&el=h1) Report\n> Merging [#2354](https://codecov.io/gh/huggingface/transformers/pull/2354?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/bfe870be654a1fc54c5479f9ad0875492d9cd959?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2354?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2354 +/- ##\n=======================================\n Coverage 73.32% 73.32% \n=======================================\n Files 87 87 \n Lines 14964 14964 \n=======================================\n Hits 10972 10972 \n Misses 3992 3992\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2354?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2354?src=pr&el=footer). Last update [bfe870b...c8c4ecd](https://codecov.io/gh/huggingface/transformers/pull/2354?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,577 | 1,577 | 1,577 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2354/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2354/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2354",
"html_url": "https://github.com/huggingface/transformers/pull/2354",
"diff_url": "https://github.com/huggingface/transformers/pull/2354.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2354.patch",
"merged_at": 1577632042000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/2353 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2353/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2353/comments | https://api.github.com/repos/huggingface/transformers/issues/2353/events | https://github.com/huggingface/transformers/pull/2353 | 543,064,260 | MDExOlB1bGxSZXF1ZXN0MzU3MzkwMzE3 | 2,353 | [http] Tweak http user-agent | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,577 | 1,577 | 1,577 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2353/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2353/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2353",
"html_url": "https://github.com/huggingface/transformers/pull/2353",
"diff_url": "https://github.com/huggingface/transformers/pull/2353.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2353.patch",
"merged_at": 1577632011000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/2352 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2352/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2352/comments | https://api.github.com/repos/huggingface/transformers/issues/2352/events | https://github.com/huggingface/transformers/pull/2352 | 543,061,144 | MDExOlB1bGxSZXF1ZXN0MzU3Mzg3NjAz | 2,352 | Cli tweaks | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Awesome!"
] | 1,577 | 1,580 | 1,577 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2352/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2352/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2352",
"html_url": "https://github.com/huggingface/transformers/pull/2352",
"diff_url": "https://github.com/huggingface/transformers/pull/2352.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2352.patch",
"merged_at": 1577544001000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/2351 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2351/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2351/comments | https://api.github.com/repos/huggingface/transformers/issues/2351/events | https://github.com/huggingface/transformers/issues/2351 | 543,026,761 | MDU6SXNzdWU1NDMwMjY3NjE= | 2,351 | GLUE Benchmark Hyperparameters | {
"login": "shreydesai",
"id": 12023280,
"node_id": "MDQ6VXNlcjEyMDIzMjgw",
"avatar_url": "https://avatars.githubusercontent.com/u/12023280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shreydesai",
"html_url": "https://github.com/shreydesai",
"followers_url": "https://api.github.com/users/shreydesai/followers",
"following_url": "https://api.github.com/users/shreydesai/following{/other_user}",
"gists_url": "https://api.github.com/users/shreydesai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shreydesai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shreydesai/subscriptions",
"organizations_url": "https://api.github.com/users/shreydesai/orgs",
"repos_url": "https://api.github.com/users/shreydesai/repos",
"events_url": "https://api.github.com/users/shreydesai/events{/privacy}",
"received_events_url": "https://api.github.com/users/shreydesai/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi, each result available on the [example page](https://huggingface.co/transformers/examples.html) shows the command that was used, displaying the hyper-parameters that are different from the defaults.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,577 | 1,584 | 1,584 | NONE | null | ## ❓ Questions & Help
In the `run_glue.py` script, are the hyperparameters for running BERT, RoBERTa, ALBERT, etc. the exact same? The documentation does not seem to outline separate hyperparameters, but the papers of each respective model show different hyperparameter ranges. I'm wondering if this was taken into account when reporting the benchmark results. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2351/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2351/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2350 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2350/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2350/comments | https://api.github.com/repos/huggingface/transformers/issues/2350/events | https://github.com/huggingface/transformers/issues/2350 | 543,017,081 | MDU6SXNzdWU1NDMwMTcwODE= | 2,350 | Trouble fine tuning BERT language model | {
"login": "Buguemar",
"id": 18621888,
"node_id": "MDQ6VXNlcjE4NjIxODg4",
"avatar_url": "https://avatars.githubusercontent.com/u/18621888?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Buguemar",
"html_url": "https://github.com/Buguemar",
"followers_url": "https://api.github.com/users/Buguemar/followers",
"following_url": "https://api.github.com/users/Buguemar/following{/other_user}",
"gists_url": "https://api.github.com/users/Buguemar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Buguemar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Buguemar/subscriptions",
"organizations_url": "https://api.github.com/users/Buguemar/orgs",
"repos_url": "https://api.github.com/users/Buguemar/repos",
"events_url": "https://api.github.com/users/Buguemar/events{/privacy}",
"received_events_url": "https://api.github.com/users/Buguemar/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I have the same question but no answer. In my case, I ran it in google colab and used easydict to deal with arg parser. \r\n\r\n/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction)\r\n 1836 .format(input.size(0), target.size(0)))\r\n 1837 if dim == 2:\r\n-> 1838 ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)\r\n 1839 elif dim == 4:\r\n 1840 ret = torch._C._nn.nll_loss2d(input, target, weight, _Reduction.get_enum(reduction), ignore_index)\r\n\r\nRuntimeError: Assertion `cur_target >= 0 && cur_target < n_classes' failed. at /pytorch/aten/src/THNN/generic/ClassNLLCriterion.c:97",
"I faced this problem when I checkout the latest code, but it is worked when I checkout v2.3.0 version.",
"facing the same problem",
"works fine when i build from source ",
"I assume you installed the transformers with pip install, there is a bug in roberta, you can manually fix it by editing transformers/modeling_roberta.py file in Line 291 - https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_roberta.py#L291\r\n\r\nChange:\r\n`loss_fct = CrossEntropyLoss(-1)`\r\nto \r\n`loss_fct = CrossEntropyLoss()`",
"Thank you guys! Specially to @orena1 ! \r\nChange the definition of the loss function (CrossEntropyLoss(-1)) worked for me. \r\n\r\nSorry for the late! but I'm very happy, I can finally fine-tune the model! hahaha"
] | 1,577 | 1,580 | 1,580 | NONE | null | ## 🐛 Bug
Hello, I'm having trouble running the **run_lm_finetuning.py** script. I'm using PyTorch 1.2, Python 3.5, CUDA 9.2, Ubuntu 18.04.
When I run
```
python run_lm_finetuning.py \
    --output_dir=my_output_dir/ \
    --model_type=bert \
    --model_name_or_path=bert-base-uncased \
    --do_train \
    --train_data_file=$TRAIN_FILE \
    --do_eval \
    --eval_data_file=$TEST_FILE \
    --mlm
```
I obtain:
```
***** Running training *****
Num examples = 4517
Num Epochs = 1
Instantaneous batch size per GPU = 4
Total train batch size (w. parallel, distributed & accumulation) = 4
Gradient Accumulation steps = 1
Total optimization steps = 1130
Epoch: 0%| | 0/1 [00:00<?, ?it/s]
Traceback (most recent call last): | 0/1130 [00:00<?, ?it/s]
File "lm_fine_backup.py", line 712, in <module>
main()
File "lm_fine_backup.py", line 662, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer)
File "lm_fine_backup.py", line 298, in train
outputs = model(inputs, masked_lm_labels=labels) if args.mlm else model(inputs, labels=labels)
File "/home/casapanshop/anaconda2/envs/py3/lib/python3.5/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/mbugueno/.local/lib/python3.5/site-packages/transformers/modeling_bert.py", line 899, in forward
masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), masked_lm_labels.view(-1))
File "/home/casapanshop/anaconda2/envs/py3/lib/python3.5/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/casapanshop/anaconda2/envs/py3/lib/python3.5/site-packages/torch/nn/modules/loss.py", line 916, in forward
ignore_index=self.ignore_index, reduction=self.reduction)
File "/home/casapanshop/anaconda2/envs/py3/lib/python3.5/site-packages/torch/nn/functional.py", line 2009, in cross_entropy
return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
File "/home/casapanshop/anaconda2/envs/py3/lib/python3.5/site-packages/torch/nn/functional.py", line 1838, in nll_loss
ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
RuntimeError: Assertion `cur_target >= 0 && cur_target < n_classes' failed. at /pytorch/aten/src/THNN/generic/ClassNLLCriterion.c:97
```
I'm using the official example script in WikiText-2 dataset.
Any insights?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2350/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2350/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2349 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2349/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2349/comments | https://api.github.com/repos/huggingface/transformers/issues/2349/events | https://github.com/huggingface/transformers/pull/2349 | 543,000,485 | MDExOlB1bGxSZXF1ZXN0MzU3MzM3MDQ3 | 2,349 | Enforce target version for black. | {
"login": "aaugustin",
"id": 788910,
"node_id": "MDQ6VXNlcjc4ODkxMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/788910?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aaugustin",
"html_url": "https://github.com/aaugustin",
"followers_url": "https://api.github.com/users/aaugustin/followers",
"following_url": "https://api.github.com/users/aaugustin/following{/other_user}",
"gists_url": "https://api.github.com/users/aaugustin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aaugustin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aaugustin/subscriptions",
"organizations_url": "https://api.github.com/users/aaugustin/orgs",
"repos_url": "https://api.github.com/users/aaugustin/repos",
"events_url": "https://api.github.com/users/aaugustin/events{/privacy}",
"received_events_url": "https://api.github.com/users/aaugustin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2349?src=pr&el=h1) Report\n> Merging [#2349](https://codecov.io/gh/huggingface/transformers/pull/2349?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/bfe870be654a1fc54c5479f9ad0875492d9cd959?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2349?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2349 +/- ##\n=======================================\n Coverage 73.32% 73.32% \n=======================================\n Files 87 87 \n Lines 14964 14964 \n=======================================\n Hits 10972 10972 \n Misses 3992 3992\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2349?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2349/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `89.9% <ø> (ø)` | :arrow_up: |\n| [src/transformers/tokenization\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/2349/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `93.93% <ø> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2349/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `61.34% <ø> (ø)` | :arrow_up: |\n| [src/transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2349/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `36.76% <ø> (ø)` | :arrow_up: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2349/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `96.2% <ø> (ø)` | :arrow_up: |\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/2349/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `89.1% <ø> (ø)` | :arrow_up: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/2349/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `68.31% <ø> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_encoder\\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/2349/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `25.3% <ø> (ø)` | :arrow_up: |\n| [src/transformers/tokenization\\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/2349/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY2FtZW1iZXJ0LnB5) | `35.71% <ø> (ø)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2349/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `87.84% <ø> (ø)` | :arrow_up: |\n| ... and [4 more](https://codecov.io/gh/huggingface/transformers/pull/2349/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2349?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2349?src=pr&el=footer). 
Last update [bfe870b...238a778](https://codecov.io/gh/huggingface/transformers/pull/2349?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,577 | 1,578 | 1,578 | CONTRIBUTOR | null | This should stabilize formatting.
As suggested by @julien-c. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2349/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2349/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2349",
"html_url": "https://github.com/huggingface/transformers/pull/2349",
"diff_url": "https://github.com/huggingface/transformers/pull/2349.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2349.patch",
"merged_at": 1578246735000
} |
https://api.github.com/repos/huggingface/transformers/issues/2348 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2348/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2348/comments | https://api.github.com/repos/huggingface/transformers/issues/2348/events | https://github.com/huggingface/transformers/issues/2348 | 542,995,393 | MDU6SXNzdWU1NDI5OTUzOTM= | 2,348 | CamembertForQuestionAnswering | {
"login": "giuliorav",
"id": 33007031,
"node_id": "MDQ6VXNlcjMzMDA3MDMx",
"avatar_url": "https://avatars.githubusercontent.com/u/33007031?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/giuliorav",
"html_url": "https://github.com/giuliorav",
"followers_url": "https://api.github.com/users/giuliorav/followers",
"following_url": "https://api.github.com/users/giuliorav/following{/other_user}",
"gists_url": "https://api.github.com/users/giuliorav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/giuliorav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/giuliorav/subscriptions",
"organizations_url": "https://api.github.com/users/giuliorav/orgs",
"repos_url": "https://api.github.com/users/giuliorav/repos",
"events_url": "https://api.github.com/users/giuliorav/events{/privacy}",
"received_events_url": "https://api.github.com/users/giuliorav/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,577 | 1,583 | 1,583 | NONE | null | Hi,
is it possible to add a _CamembertForQuestionAnswering_ class that extends _RobertaForQuestionAnswering_ to **src/transformers/modeling_camembert.py**, **src/transformers/__init__.py** and **examples/run_squad.py**?
I had to manually force it in order to execute _run_squad.py_ with a CamemBERT-like network.
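Something along these lines might be enough (a rough, untested sketch that just mirrors the pattern the other Camembert classes use; it assumes a `RobertaForQuestionAnswering` implementation and the usual `CAMEMBERT_PRETRAINED_MODEL_ARCHIVE_MAP` constant are available):
```
# Rough sketch only; not the actual library code.
from transformers.configuration_camembert import CamembertConfig
from transformers.modeling_camembert import CAMEMBERT_PRETRAINED_MODEL_ARCHIVE_MAP
from transformers.modeling_roberta import RobertaForQuestionAnswering


class CamembertForQuestionAnswering(RobertaForQuestionAnswering):
    """CamemBERT with a span-classification head, reusing the RoBERTa QA head."""

    config_class = CamembertConfig
    pretrained_model_archive_map = CAMEMBERT_PRETRAINED_MODEL_ARCHIVE_MAP
```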
Thanks. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2348/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2348/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2347 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2347/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2347/comments | https://api.github.com/repos/huggingface/transformers/issues/2347/events | https://github.com/huggingface/transformers/pull/2347 | 542,969,141 | MDExOlB1bGxSZXF1ZXN0MzU3MzEwODky | 2,347 | revise T5 code to support one step decoding during generation | {
"login": "eelxpeng",
"id": 14260983,
"node_id": "MDQ6VXNlcjE0MjYwOTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/14260983?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eelxpeng",
"html_url": "https://github.com/eelxpeng",
"followers_url": "https://api.github.com/users/eelxpeng/followers",
"following_url": "https://api.github.com/users/eelxpeng/following{/other_user}",
"gists_url": "https://api.github.com/users/eelxpeng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eelxpeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eelxpeng/subscriptions",
"organizations_url": "https://api.github.com/users/eelxpeng/orgs",
"repos_url": "https://api.github.com/users/eelxpeng/repos",
"events_url": "https://api.github.com/users/eelxpeng/events{/privacy}",
"received_events_url": "https://api.github.com/users/eelxpeng/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,577 | 1,583 | 1,583 | NONE | null | @thomwolf Hi, I'm new to contribute to this project. I revised the T5 code to support one step decoding during generation based on your implementation. Besides adding `decode_step` function, I also revised some others to pass in `cache` variable. Meanwhile, I added `bos_token` in tokenizer_t5 so that `bos_token` can be used as the first token during decoding. The example usage is as follows:
```
Examples:
encoder_hidden_states = model.encode(input_ids)
cache = model.init_state_from_encoder(encoder_hidden_states)
batch_size = input_ids.size(0)
next_token = input_ids.new_full((batch_size, 1), tokenizer.bos_token_id)
generated = [next_token]
for i in range(100):
    logits, cache = model.decode_step(cache, input_ids=next_token)
    next_token = torch.argmax(logits, dim=-1).unsqueeze(-1)
    generated += [next_token]
generated = torch.cat(generated, dim=1).tolist()
```
Let me know whether it is useful and can be merged. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2347/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2347/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2347",
"html_url": "https://github.com/huggingface/transformers/pull/2347",
"diff_url": "https://github.com/huggingface/transformers/pull/2347.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2347.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/2346 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2346/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2346/comments | https://api.github.com/repos/huggingface/transformers/issues/2346/events | https://github.com/huggingface/transformers/issues/2346 | 542,916,889 | MDU6SXNzdWU1NDI5MTY4ODk= | 2,346 | Why does the BertForQuestionAnswering sample code duplicate the [CLS] token? | {
"login": "jswift24",
"id": 1891204,
"node_id": "MDQ6VXNlcjE4OTEyMDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1891204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jswift24",
"html_url": "https://github.com/jswift24",
"followers_url": "https://api.github.com/users/jswift24/followers",
"following_url": "https://api.github.com/users/jswift24/following{/other_user}",
"gists_url": "https://api.github.com/users/jswift24/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jswift24/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jswift24/subscriptions",
"organizations_url": "https://api.github.com/users/jswift24/orgs",
"repos_url": "https://api.github.com/users/jswift24/repos",
"events_url": "https://api.github.com/users/jswift24/events{/privacy}",
"received_events_url": "https://api.github.com/users/jswift24/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Indeed, this is a mistake, thank you for raising an issue. It should have been fixed with 74755c89b92e0c0c027221c13fd034afed4d2136.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,577 | 1,584 | 1,584 | NONE | null | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
The BertForQuestionAnswering sample code creates duplicate [CLS] tokens. Wondering why:
```
import torch
from transformers import BertTokenizer, BertForQuestionAnswering

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForQuestionAnswering.from_pretrained('bert-large-uncased-whole-word-masking-finetuned-squad')
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
input_text = "[CLS] " + question + " [SEP] " + text + " [SEP]"
input_ids = tokenizer.encode(input_text)
token_type_ids = [0 if i <= input_ids.index(102) else 1 for i in range(len(input_ids))]
start_scores, end_scores = model(torch.tensor([input_ids]), token_type_ids=torch.tensor([token_type_ids]))
all_tokens = tokenizer.convert_ids_to_tokens(input_ids)
print(' '.join(all_tokens[torch.argmax(start_scores) : torch.argmax(end_scores)+1]))
# a nice puppet
tokenizer.decode(input_ids)
#'[CLS] [CLS] who was jim henson? [SEP] jim henson was a nice puppet [SEP] [SEP]'
```
If I remove the extra [CLS], the extraction doesn't work. It's exactly two tokens off:
```
input_ids = tokenizer.encode(input_text, add_special_tokens=False)
...rerun same code as above...
print(' '.join(all_tokens[torch.argmax(start_scores) : torch.argmax(end_scores)+1]))
# was a
```
What am I doing wrong? How can I get the extraction working without duplicate [CLS] tokens? (and duplicate final [SEP] tokens BTW).
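One variant that might avoid the manual `[CLS]`/`[SEP]` strings entirely (hedged sketch, not taken from the docs) is to pass the pair to `encode` and let it insert the special tokens itself:
```
# Sketch: let the tokenizer add [CLS]/[SEP] instead of writing them by hand.
input_ids = tokenizer.encode(question, text)  # add_special_tokens=True by default
token_type_ids = [0 if i <= input_ids.index(102) else 1 for i in range(len(input_ids))]
start_scores, end_scores = model(torch.tensor([input_ids]), token_type_ids=torch.tensor([token_type_ids]))
```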
The sample code comes right from the docs: https://huggingface.co/transformers/model_doc/bert.html#bertforquestionanswering | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2346/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2346/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2345 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2345/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2345/comments | https://api.github.com/repos/huggingface/transformers/issues/2345/events | https://github.com/huggingface/transformers/issues/2345 | 542,913,668 | MDU6SXNzdWU1NDI5MTM2Njg= | 2,345 | Feature Request: Pipeline for Query/Document relevance | {
"login": "ArthurCamara",
"id": 709027,
"node_id": "MDQ6VXNlcjcwOTAyNw==",
"avatar_url": "https://avatars.githubusercontent.com/u/709027?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurCamara",
"html_url": "https://github.com/ArthurCamara",
"followers_url": "https://api.github.com/users/ArthurCamara/followers",
"following_url": "https://api.github.com/users/ArthurCamara/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurCamara/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurCamara/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurCamara/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurCamara/orgs",
"repos_url": "https://api.github.com/users/ArthurCamara/repos",
"events_url": "https://api.github.com/users/ArthurCamara/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurCamara/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1771187924,
"node_id": "MDU6TGFiZWwxNzcxMTg3OTI0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Pipeline",
"name": "Core: Pipeline",
"color": "FF7066",
"default": false,
"description": "Internals of the library; Pipeline."
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"unstale because this is very interesting",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,577 | 1,589 | 1,589 | NONE | null | # Pipelines for IR tasks
## Justification
In the last few years, a bunch of deep architectures were proposed for Ad-hoc retrieval, most with limited success (if any). However, BERT (et al.)-based models are finally pushing the state of the art for Ad-hoc retrieval. In fact, the last TREC had a [Deep Learning track](https://microsoft.github.io/TREC-2019-Deep-Learning/) where "NNLM" (neural network language models) [dominated](https://twitter.com/UnderdogGeek/status/1206595356017848324/photo/1) both traditional approaches (mostly BM25 and variations) and other deep models.
So, it's a current trend that BERT should be the new baseline for any proposed model for IR.
## Description
There should be a [pipeline-like](https://github.com/huggingface/transformers#quick-tour-of-pipelines) feature that is able to score pairs of documents and user queries. Probably, pre-trained on a dataset like the [MSMarco dataset for TREC'19](https://microsoft.github.io/TREC-2019-Deep-Learning/). Ideally, this would also support a list of documents to rank and return scores.
In real-life applications, one would probably want to combine BERT scores with traditional baseline scores (like QL or BM25), so the raw score is needed (or, even better, combine it with something like [pyserini](https://github.com/castorini/pyserini) in the backend?).
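For instance, a naive way to fold such a score into an existing BM25 ranker could be a simple linear interpolation (purely illustrative; the pipeline name and `bm25_scores` are placeholders, not an existing API):
```
# Illustrative only: interpolate a hypothetical relevancy score with BM25.
from transformers import pipeline

nlp = pipeline('document-relevancy')  # hypothetical pipeline name
alpha = 0.7

def rerank(query, docs, bm25_scores):
    bert_scores = [nlp({'query': query, 'context': d})['score'] for d in docs]
    combined = [alpha * s + (1 - alpha) * b for s, b in zip(bert_scores, bm25_scores)]
    return sorted(zip(docs, combined), key=lambda x: x[1], reverse=True)
```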
## Usage
```
from transformers import pipeline
# Allocate a pipeline for document relevancy scoring
nlp = pipeline('document-relevancy')
nlp({
    'query': 'can hives be a sign of pregnancy',
    'context': '<document content>'
})
>>> {'score': 0.28756016668193496}
```
I have already used DistilBERT on a paper to appear on ECIR2020 (Diagnosing BERT with Retrieval Heuristics), and would be able to contribute the model for this (even for bert-base).
I would also love to contribute with this, but will probably need some guidance, if anyone is willing to help.
Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2345/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2345/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2344 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2344/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2344/comments | https://api.github.com/repos/huggingface/transformers/issues/2344/events | https://github.com/huggingface/transformers/issues/2344 | 542,894,470 | MDU6SXNzdWU1NDI4OTQ0NzA= | 2,344 | How to run bert without checkpoints | {
"login": "calusbr",
"id": 25322394,
"node_id": "MDQ6VXNlcjI1MzIyMzk0",
"avatar_url": "https://avatars.githubusercontent.com/u/25322394?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/calusbr",
"html_url": "https://github.com/calusbr",
"followers_url": "https://api.github.com/users/calusbr/followers",
"following_url": "https://api.github.com/users/calusbr/following{/other_user}",
"gists_url": "https://api.github.com/users/calusbr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/calusbr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/calusbr/subscriptions",
"organizations_url": "https://api.github.com/users/calusbr/orgs",
"repos_url": "https://api.github.com/users/calusbr/repos",
"events_url": "https://api.github.com/users/calusbr/events{/privacy}",
"received_events_url": "https://api.github.com/users/calusbr/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,577 | 1,583 | 1,583 | NONE | null | I would like to run BERT from scratch with no checkpoints for my language (PT-BR) and make a comparison with the multilingual model!
I am currently running the native BERT code provided by Google to get checkpoints from scratch and then converting them to PyTorch, but it is a time-consuming process!
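(For reference, on the transformers side "no checkpoints" is probably just building the model from a fresh config; a minimal, untested sketch with placeholder values:)
```
# Minimal sketch: a randomly initialized BERT-style masked LM, no pretrained checkpoint.
from transformers import BertConfig, BertForMaskedLM

config = BertConfig(vocab_size=30000)  # placeholder size for a PT-BR vocabulary
model = BertForMaskedLM(config)        # random weights, ready to pre-train from scratch
```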
can anybody help me? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2344/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2344/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2343 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2343/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2343/comments | https://api.github.com/repos/huggingface/transformers/issues/2343/events | https://github.com/huggingface/transformers/issues/2343 | 542,861,840 | MDU6SXNzdWU1NDI4NjE4NDA= | 2,343 | How to finetune PreTrainedEncoderDecoder | {
"login": "fengzhangyin",
"id": 33511257,
"node_id": "MDQ6VXNlcjMzNTExMjU3",
"avatar_url": "https://avatars.githubusercontent.com/u/33511257?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fengzhangyin",
"html_url": "https://github.com/fengzhangyin",
"followers_url": "https://api.github.com/users/fengzhangyin/followers",
"following_url": "https://api.github.com/users/fengzhangyin/following{/other_user}",
"gists_url": "https://api.github.com/users/fengzhangyin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fengzhangyin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fengzhangyin/subscriptions",
"organizations_url": "https://api.github.com/users/fengzhangyin/orgs",
"repos_url": "https://api.github.com/users/fengzhangyin/repos",
"events_url": "https://api.github.com/users/fengzhangyin/events{/privacy}",
"received_events_url": "https://api.github.com/users/fengzhangyin/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi,\r\n\r\nThanks for the nice work. I have the same question.\r\nWould appreciate your reply. ",
"Yes , example would be nice",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,577 | 1,583 | 1,583 | NONE | null | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
PreTrainedEncoderDecoder is great.
Now I have the following questions :
(1) How to use my data to finetune the PreTrainedEncoderDecoder?
(2) If I want to use the pretrained RoBERTa as encoder and decoder, what should I do? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2343/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2343/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2342 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2342/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2342/comments | https://api.github.com/repos/huggingface/transformers/issues/2342/events | https://github.com/huggingface/transformers/pull/2342 | 542,856,814 | MDExOlB1bGxSZXF1ZXN0MzU3MjE3NjM0 | 2,342 | Tokenizers as optional dependency | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,577 | 1,651 | 1,583 | MEMBER | null | - `tokenizers` as an optional dependency (`pip install -e .[fast]`)
- code formatting with `make style` / `make quality` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2342/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2342/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2342",
"html_url": "https://github.com/huggingface/transformers/pull/2342",
"diff_url": "https://github.com/huggingface/transformers/pull/2342.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2342.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/2341 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2341/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2341/comments | https://api.github.com/repos/huggingface/transformers/issues/2341/events | https://github.com/huggingface/transformers/issues/2341 | 542,764,020 | MDU6SXNzdWU1NDI3NjQwMjA= | 2,341 | "Reformer: The Efficient Transformer" looks awesome. I'd love to see it in the library. | {
"login": "Alan-Lee123",
"id": 11437976,
"node_id": "MDQ6VXNlcjExNDM3OTc2",
"avatar_url": "https://avatars.githubusercontent.com/u/11437976?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Alan-Lee123",
"html_url": "https://github.com/Alan-Lee123",
"followers_url": "https://api.github.com/users/Alan-Lee123/followers",
"following_url": "https://api.github.com/users/Alan-Lee123/following{/other_user}",
"gists_url": "https://api.github.com/users/Alan-Lee123/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Alan-Lee123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Alan-Lee123/subscriptions",
"organizations_url": "https://api.github.com/users/Alan-Lee123/orgs",
"repos_url": "https://api.github.com/users/Alan-Lee123/repos",
"events_url": "https://api.github.com/users/Alan-Lee123/events{/privacy}",
"received_events_url": "https://api.github.com/users/Alan-Lee123/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I have started to refactor the original source code in Pytorch if you'd like to help I'd greatly appreciate it! [https://github.com/zbloss/reformer](https://github.com/zbloss/reformer)",
"I have a working implementation at https://github.com/lucidrains/reformer-pytorch !",
"Any update on adding this to the library?",
"They published in their blog about it https://ai.googleblog.com/2020/01/reformer-efficient-transformer.html?m=1",
"Hence my interest in a huggingface implementation :) ",
"Looking forward to see this model in the transformers lib :)",
"Don't think we should rush this one. The reformer paper is pretty tricky to implement in a clean way, plus there aren't any pre-trained models that use it yet. Just one person's opinion, though.",
"The implementation by @lucidrains seems to work https://github.com/lucidrains/reformer-pytorch ; it'd be cool if it was included in the transformers library. It seems strange to me that no pretrained Reformer has been uploaded since the paper was released, any ideas why? is it possible that it doesn't work in practice as stated by the authors in the paper? Anyone who has trained a Reformer on their own and have tried it to solve a real problem?\r\nThank you very much in advance",
"Same here, curious to know why. Thank you!",
"https://github.com/google/trax/blob/master/trax/models/reformer/machine_translation.ipynb\r\n\r\nThere should be an pretrained model now.\r\nWould be very happy to see Reformer model in this project.",
"+1",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Closed by @patrickvonplaten ",
"has this been done ?"
] | 1,577 | 1,590 | 1,590 | NONE | null | # 🌟New model addition
## Model description
Efficient Transformer with locality-sensitive hashing and reversible layers
https://openreview.net/forum?id=rkgNKkHtvB
<!-- Important information -->
## Open Source status
* [. ] the model implementation is available: (give details)
There is an implementation from google
https://github.com/google/trax/blob/master/trax/models/research/reformer.py
* [ ] the model weights are available: (give details)
* [. ] who are the authors: (mention them)
Nikita Kitaev, Lukasz Kaiser, Anselm Levskaya
## Additional context
<!-- Add any other context about the problem here. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2341/reactions",
"total_count": 78,
"+1": 62,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 16,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2341/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2340 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2340/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2340/comments | https://api.github.com/repos/huggingface/transformers/issues/2340/events | https://github.com/huggingface/transformers/issues/2340 | 542,752,109 | MDU6SXNzdWU1NDI3NTIxMDk= | 2,340 | Bert cross attention | {
"login": "cristipp",
"id": 615102,
"node_id": "MDQ6VXNlcjYxNTEwMg==",
"avatar_url": "https://avatars.githubusercontent.com/u/615102?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cristipp",
"html_url": "https://github.com/cristipp",
"followers_url": "https://api.github.com/users/cristipp/followers",
"following_url": "https://api.github.com/users/cristipp/following{/other_user}",
"gists_url": "https://api.github.com/users/cristipp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cristipp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cristipp/subscriptions",
"organizations_url": "https://api.github.com/users/cristipp/orgs",
"repos_url": "https://api.github.com/users/cristipp/repos",
"events_url": "https://api.github.com/users/cristipp/events{/privacy}",
"received_events_url": "https://api.github.com/users/cristipp/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I would imagine the idea is to incorporate a strong presence of the encoder hidden states - else the conditioning on the encoder might be weak. \r\n\r\nWe do an attention without the encoder hidden states anyways before the cross attention."
] | 1,577 | 1,581 | 1,579 | NONE | null | ## ❓ Questions & Help
In the standard Transformer/Bert architecture, what is the intuition behind cross attention doing a weighted average over the encoder_hidden_states? What happens if we set the value layer to decoder hidden_state instead?
See the cross attention value layer being set to the encoder_hidden_states at [modeling_bert.py#L238](https://github.com/huggingface/transformers/blob/8c67b529f615cc24c46864b8323d2d47a15ccd58/src/transformers/modeling_bert.py#L238), and the weighted average being taken at [modeling_bert.py#L266](https://github.com/huggingface/transformers/blob/8c67b529f615cc24c46864b8323d2d47a15ccd58/src/transformers/modeling_bert.py#L266)
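For concreteness, the operation in question is roughly the following (a small self-contained sketch of the math, not the library code):
```
# Sketch of standard cross-attention over encoder states.
import math
import torch

d_head = 64
query_dec = torch.randn(5, d_head)   # 5 decoder positions
key_enc = torch.randn(7, d_head)     # 7 encoder positions
value_enc = torch.randn(7, d_head)

scores = query_dec @ key_enc.transpose(-1, -2) / math.sqrt(d_head)  # (5, 7)
weights = scores.softmax(dim=-1)     # attention over encoder positions
context = weights @ value_enc        # weighted average of encoder_hidden_states
# Swapping value_enc for a decoder-side value would remove the encoder content
# from the output entirely; only the mixing weights would still depend on it.
```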
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2340/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2340/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2339 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2339/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2339/comments | https://api.github.com/repos/huggingface/transformers/issues/2339/events | https://github.com/huggingface/transformers/pull/2339 | 542,743,310 | MDExOlB1bGxSZXF1ZXN0MzU3MTI0NzY3 | 2,339 | read each lines, require less memory | {
"login": "knok",
"id": 1149984,
"node_id": "MDQ6VXNlcjExNDk5ODQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1149984?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/knok",
"html_url": "https://github.com/knok",
"followers_url": "https://api.github.com/users/knok/followers",
"following_url": "https://api.github.com/users/knok/following{/other_user}",
"gists_url": "https://api.github.com/users/knok/gists{/gist_id}",
"starred_url": "https://api.github.com/users/knok/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/knok/subscriptions",
"organizations_url": "https://api.github.com/users/knok/orgs",
"repos_url": "https://api.github.com/users/knok/repos",
"events_url": "https://api.github.com/users/knok/events{/privacy}",
"received_events_url": "https://api.github.com/users/knok/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2339?src=pr&el=h1) Report\n> Merging [#2339](https://codecov.io/gh/huggingface/transformers/pull/2339?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/537a1de53d824b5851bce32cb5eafaef3f9ce5ef?src=pr&el=desc) will **increase** coverage by `1.11%`.\n> The diff coverage is `75.31%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2339?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2339 +/- ##\n=========================================\n+ Coverage 73.49% 74.6% +1.11% \n=========================================\n Files 87 87 \n Lines 14793 14802 +9 \n=========================================\n+ Hits 10872 11043 +171 \n+ Misses 3921 3759 -162\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2339?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/2339/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jYW1lbWJlcnQucHk=) | `100% <ø> (ø)` | :arrow_up: |\n| [src/transformers/tokenization\\_bert\\_japanese.py](https://codecov.io/gh/huggingface/transformers/pull/2339/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydF9qYXBhbmVzZS5weQ==) | `67.46% <ø> (ø)` | :arrow_up: |\n| [src/transformers/configuration\\_mmbt.py](https://codecov.io/gh/huggingface/transformers/pull/2339/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX21tYnQucHk=) | `55.55% <ø> (ø)` | :arrow_up: |\n| [src/transformers/commands/serving.py](https://codecov.io/gh/huggingface/transformers/pull/2339/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy9zZXJ2aW5nLnB5) | `0% <ø> (ø)` | :arrow_up: |\n| [src/transformers/commands/convert.py](https://codecov.io/gh/huggingface/transformers/pull/2339/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy9jb252ZXJ0LnB5) | `0% <ø> (ø)` | :arrow_up: |\n| [src/transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2339/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `36.76% <0%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_mmbt.py](https://codecov.io/gh/huggingface/transformers/pull/2339/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tbWJ0LnB5) | `17.6% <0%> (ø)` | :arrow_up: |\n| [src/transformers/tokenization\\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/2339/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY2FtZW1iZXJ0LnB5) | `35.71% <0%> (ø)` | :arrow_up: |\n| [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/2339/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `27.86% <0%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_encoder\\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/2339/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `25.3% <0%> (ø)` | :arrow_up: |\n| ... and [72 more](https://codecov.io/gh/huggingface/transformers/pull/2339/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2339?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2339?src=pr&el=footer). Last update [537a1de...9166b24](https://codecov.io/gh/huggingface/transformers/pull/2339?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"We recently merged a `LineByLineTextDataset` that should be equivalent:\r\nhttps://github.com/huggingface/transformers/blob/master/examples/run_lm_finetuning.py#L124\r\n\r\nFeedback welcome."
] | 1,577 | 1,580 | 1,580 | NONE | null | The original code reads whole data at once, so it requires so much memory to handle huge corpus.
This change:
* reads the corpus line by line
* flattens the resulting 2-dimensional list with itertools.chain, which requires less memory and is fast (see the sketch below) | {
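Roughly the idea (illustrative sketch only, not the exact diff; names are placeholders):
```
# Sketch: tokenize one line at a time, then flatten with itertools.chain.
import itertools

def load_examples(path, tokenizer):
    tokenized_lines = []
    with open(path, encoding="utf-8") as f:
        for line in f:                      # no giant read() of the whole file
            line = line.strip()
            if line:
                tokenized_lines.append(tokenizer.tokenize(line))
    tokens = list(itertools.chain.from_iterable(tokenized_lines))
    return tokenizer.convert_tokens_to_ids(tokens)
```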
"url": "https://api.github.com/repos/huggingface/transformers/issues/2339/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2339/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2339",
"html_url": "https://github.com/huggingface/transformers/pull/2339",
"diff_url": "https://github.com/huggingface/transformers/pull/2339.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2339.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/2338 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2338/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2338/comments | https://api.github.com/repos/huggingface/transformers/issues/2338/events | https://github.com/huggingface/transformers/issues/2338 | 542,695,809 | MDU6SXNzdWU1NDI2OTU4MDk= | 2,338 | Summarization ROGUE scores don't equal that of the paper ... | {
"login": "ohmeow",
"id": 14000,
"node_id": "MDQ6VXNlcjE0MDAw",
"avatar_url": "https://avatars.githubusercontent.com/u/14000?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ohmeow",
"html_url": "https://github.com/ohmeow",
"followers_url": "https://api.github.com/users/ohmeow/followers",
"following_url": "https://api.github.com/users/ohmeow/following{/other_user}",
"gists_url": "https://api.github.com/users/ohmeow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ohmeow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ohmeow/subscriptions",
"organizations_url": "https://api.github.com/users/ohmeow/orgs",
"repos_url": "https://api.github.com/users/ohmeow/repos",
"events_url": "https://api.github.com/users/ohmeow/events{/privacy}",
"received_events_url": "https://api.github.com/users/ohmeow/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,577 | 1,583 | 1,583 | CONTRIBUTOR | null | ## ❓ Questions & Help
Just ran the `run_summarization.py` script, with the parameters specified [here](https://github.com/huggingface/transformers/tree/master/examples/summarization), and the ROUGE scores are far off from what is reported in the related paper.
The ROUGE scores reported in the [PreSumm paper](https://github.com/nlpyang/PreSumm) (R1, R2, RL):
> BertSumExtAbs | 42.13 | 19.60 | 39.18
The ROUGE scores after running the HF script:
> ROUGE 1:
> F1 = .275
> Precision = .299
> Recall = .260
>
> ROUGE 2:
> F1 = .161
> Precision = .184
> Recall = .149
>
> ROUGE L:
> F1 = .305
> Precision = .326
> Recall = .290
The README file seems to suggest that running the script as is, with all the stories in a single directory, will give you ROUGE scores similar to those of the paper. That doesn't seem to be the case.
***Any ideas why? Or what I may be doing wrong here?***
Thanks much!
FYI ... ran the script as in the README:
```
python run_summarization.py \
--documents_dir $STORIES_DIR \
--summaries_output_dir $OUTPUT_SUM_DIR \
--no_cuda false \
--batch_size 4 \
--min_length 50 \
--max_length 200 \
--beam_size 5 \
--alpha 0.95 \
--block_trigram true \
--compute_rouge true
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2338/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2338/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2337 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2337/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2337/comments | https://api.github.com/repos/huggingface/transformers/issues/2337/events | https://github.com/huggingface/transformers/issues/2337 | 542,670,080 | MDU6SXNzdWU1NDI2NzAwODA= | 2,337 | Dropout rates to be updated in all ALBERT v2 configs | {
"login": "matteodelv",
"id": 13894536,
"node_id": "MDQ6VXNlcjEzODk0NTM2",
"avatar_url": "https://avatars.githubusercontent.com/u/13894536?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/matteodelv",
"html_url": "https://github.com/matteodelv",
"followers_url": "https://api.github.com/users/matteodelv/followers",
"following_url": "https://api.github.com/users/matteodelv/following{/other_user}",
"gists_url": "https://api.github.com/users/matteodelv/gists{/gist_id}",
"starred_url": "https://api.github.com/users/matteodelv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/matteodelv/subscriptions",
"organizations_url": "https://api.github.com/users/matteodelv/orgs",
"repos_url": "https://api.github.com/users/matteodelv/repos",
"events_url": "https://api.github.com/users/matteodelv/events{/privacy}",
"received_events_url": "https://api.github.com/users/matteodelv/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I found it already been updated: https://s3.amazonaws.com/models.huggingface.co/bert/albert-xxlarge-v2-config.json\r\n\r\nDid I miss something?",
"Base, large and xlarge v2 configs have to be updated too, as confirmed by this [issue](https://github.com/google-research/ALBERT/issues/23) in the official Google Research repository. ",
"Thanks. Will pay attention to this one. Hope it will be fix soon.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,577 | 1,582 | 1,582 | NONE | null | ## 🐛 Bug
<!-- Important information -->
Model I am using (Bert, XLNet....): ALBERT v2
Language I am using the model on (English, Chinese....): English
The problem arises when using:
* [x] the official example scripts: run_squad.py
* [ ] my own modified scripts:
The task I am working on is:
* [x] an official GLUE/SQUaD task: SQuAD v2.0
* [ ] my own task or dataset:
## To Reproduce
Steps to reproduce the behavior:
1. Just choose one of the following default ALBERT v2 configs: base, large, xlarge
<!-- If you have a code sample, error messages, stack traces, please provide it here as well. -->
## Expected behavior
As stated in the updated ALBERT Tensorflow repo from Google Research, model v2 introduces no dropout at all for downstream tasks, like SQuAD, MRPC or COLA.
Following the discussion in this [issue](https://github.com/google-research/ALBERT/issues/23), the model configurations on TF-Hub were wrong, and so are the ones used by transformers (loaded from Amazon S3).
<s>While configs on TF-Hub will be updated in the near future</s> As configs on TF-Hub have already been updated, transformers' ones should be updated too: parameters `attention_probs_dropout_prob` and `hidden_dropout_prob` should be both 0 for all v2 configs.
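In the meantime, a possible workaround on the user side (just a sketch, I have not verified that the override reaches every dropout layer) is to zero the dropout probabilities explicitly when loading the config:
```python
from transformers import AlbertConfig, AlbertForQuestionAnswering

# Sketch of a workaround (assumption: from_pretrained kwargs override the
# corresponding config attributes): force both dropout probabilities to 0.
config = AlbertConfig.from_pretrained(
    "albert-xlarge-v2",
    attention_probs_dropout_prob=0.0,
    hidden_dropout_prob=0.0,
)
model = AlbertForQuestionAnswering.from_pretrained("albert-xlarge-v2", config=config)
```
Still, fixing the hosted configs themselves would be the cleaner solution.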
## Environment
* OS: Platform Linux-4.14.152-98.182.amzn1.x86_64-x86_64-with-glibc2.9
* Python version: Python 3.6.5 |Anaconda, Inc.| (default, Apr 29 2018, 16:14:56)
* PyTorch version: PyTorch 1.3.1
* PyTorch Transformers version (or branch):
* Using GPU ? YES, 4x NVIDIA V100 16GB
* Distributed or parallel setup? No
* Any other relevant information: Using Amazon AWS Deep Learning Linux AMI 26.0
## Additional context
Hope this will be useful!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2337/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2337/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2336 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2336/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2336/comments | https://api.github.com/repos/huggingface/transformers/issues/2336/events | https://github.com/huggingface/transformers/issues/2336 | 542,644,580 | MDU6SXNzdWU1NDI2NDQ1ODA= | 2,336 | TypeError: Expected Operation, Variable, or Tensor, got None while saving tensorflow model | {
"login": "jonanem",
"id": 14140685,
"node_id": "MDQ6VXNlcjE0MTQwNjg1",
"avatar_url": "https://avatars.githubusercontent.com/u/14140685?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jonanem",
"html_url": "https://github.com/jonanem",
"followers_url": "https://api.github.com/users/jonanem/followers",
"following_url": "https://api.github.com/users/jonanem/following{/other_user}",
"gists_url": "https://api.github.com/users/jonanem/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jonanem/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jonanem/subscriptions",
"organizations_url": "https://api.github.com/users/jonanem/orgs",
"repos_url": "https://api.github.com/users/jonanem/repos",
"events_url": "https://api.github.com/users/jonanem/events{/privacy}",
"received_events_url": "https://api.github.com/users/jonanem/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"The training of the model is successful, but getting errors only while saving the model",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,577 | 1,583 | 1,583 | NONE | null | ## 🐛 Bug
<!-- Important information -->
Model I am using (Bert, XLNet....): TFAlbert
Language I am using the model on (English, Chinese....): English
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name): GLUE
* [ ] my own task or dataset: (give details)
## Environment
* OS: Linux
* Python version: Python 3.7.5 /
* Tensorflow version: 2.0.0
* Using GPU ? GPU
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux
- TensorFlow installed from (source or binary): Source
- TensorFlow version: '2.0.0'
- Python version: Python 3.7.5 /Conda
- CUDA/cuDNN version: cuda10.0_0/cudnn-7.6.5
- GPU model and memory: Tesla V100-PCIE / 32 GB memory
**Describe the current behavior**
**I am getting TypeError: Expected Operation, Variable, or Tensor, got None while saving the model using model.save('../output/my_model')**
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-49-5ab71d0ebc23> in <module>
----> 1 model.save('../output/my_model')
/app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/network.py in save(self, filepath, overwrite, include_optimizer, save_format, signatures, options)
973 """
974 saving.save_model(self, filepath, overwrite, include_optimizer, save_format,
--> 975 signatures, options)
976
977 def save_weights(self, filepath, overwrite=True, save_format=None):
/app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/save.py in save_model(model, filepath, overwrite, include_optimizer, save_format, signatures, options)
113 else:
114 saved_model_save.save(model, filepath, overwrite, include_optimizer,
--> 115 signatures, options)
116
117
/app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/saved_model/save.py in save(model, filepath, overwrite, include_optimizer, signatures, options)
72 # default learning phase placeholder.
73 with K.learning_phase_scope(0):
---> 74 save_lib.save(model, filepath, signatures, options)
75
76 if not include_optimizer:
/app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/saved_model/save.py in save(obj, export_dir, signatures, options)
868 if signatures is None:
869 signatures = signature_serialization.find_function_to_export(
--> 870 checkpoint_graph_view)
871
872 signatures = signature_serialization.canonicalize_signatures(signatures)
/app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/saved_model/signature_serialization.py in find_function_to_export(saveable_view)
62 # If the user did not specify signatures, check the root object for a function
63 # that can be made into a signature.
---> 64 functions = saveable_view.list_functions(saveable_view.root)
65 signature = functions.get(DEFAULT_SIGNATURE_ATTR, None)
66 if signature is not None:
/app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/saved_model/save.py in list_functions(self, obj)
139 if obj_functions is None:
140 obj_functions = obj._list_functions_for_serialization( # pylint: disable=protected-access
--> 141 self._serialization_cache)
142 self._functions[obj] = obj_functions
143 return obj_functions
/app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py in _list_functions_for_serialization(self, serialization_cache)
2420 def _list_functions_for_serialization(self, serialization_cache):
2421 return (self._trackable_saved_model_saver
-> 2422 .list_functions_for_serialization(serialization_cache))
2423
2424
/app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/saved_model/base_serialization.py in list_functions_for_serialization(self, serialization_cache)
89 `ConcreteFunction`.
90 """
---> 91 fns = self.functions_to_serialize(serialization_cache)
92
93 # The parent AutoTrackable class saves all user-defined tf.functions, and
/app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/saved_model/layer_serialization.py in functions_to_serialize(self, serialization_cache)
77 def functions_to_serialize(self, serialization_cache):
78 return (self._get_serialized_attributes(
---> 79 serialization_cache).functions_to_serialize)
80
81 def _get_serialized_attributes(self, serialization_cache):
/app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/saved_model/layer_serialization.py in _get_serialized_attributes(self, serialization_cache)
92
93 object_dict, function_dict = self._get_serialized_attributes_internal(
---> 94 serialization_cache)
95
96 serialized_attr.set_and_validate_objects(object_dict)
/app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/saved_model/model_serialization.py in _get_serialized_attributes_internal(self, serialization_cache)
45 # cache (i.e. this is the root level object).
46 if len(serialization_cache[constants.KERAS_CACHE_KEY]) == 1:
---> 47 default_signature = save_impl.default_save_signature(self.obj)
48
49 # Other than the default signature function, all other attributes match with
/app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/saved_model/save_impl.py in default_save_signature(layer)
204 original_losses = _reset_layer_losses(layer)
205 fn = saving_utils.trace_model_call(layer)
--> 206 fn.get_concrete_function()
207 _restore_layer_losses(original_losses)
208 return fn
/app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py in get_concrete_function(self, *args, **kwargs)
774 if self._stateful_fn is None:
775 initializer_map = object_identity.ObjectIdentityDictionary()
--> 776 self._initialize(args, kwargs, add_initializers_to=initializer_map)
777 self._initialize_uninitialized_variables(initializer_map)
778
/app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py in _initialize(self, args, kwds, add_initializers_to)
406 self._concrete_stateful_fn = (
407 self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access
--> 408 *args, **kwds))
409
410 def invalid_creator_scope(*unused_args, **unused_kwds):
/app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs)
1846 if self.input_signature:
1847 args, kwargs = None, None
-> 1848 graph_function, _, _ = self._maybe_define_function(args, kwargs)
1849 return graph_function
1850
/app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py in _maybe_define_function(self, args, kwargs)
2148 graph_function = self._function_cache.primary.get(cache_key, None)
2149 if graph_function is None:
-> 2150 graph_function = self._create_graph_function(args, kwargs)
2151 self._function_cache.primary[cache_key] = graph_function
2152 return graph_function, args, kwargs
/app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
2039 arg_names=arg_names,
2040 override_flat_arg_shapes=override_flat_arg_shapes,
-> 2041 capture_by_value=self._capture_by_value),
2042 self._function_attributes,
2043 # Tell the ConcreteFunction to clean up its graph once it goes out of
/app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)
913 converted_func)
914
--> 915 func_outputs = python_func(*func_args, **func_kwargs)
916
917 # invariant: `func_outputs` contains only Tensors, CompositeTensors,
/app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py in wrapped_fn(*args, **kwds)
356 # __wrapped__ allows AutoGraph to swap in a converted function. We give
357 # the function a weak reference to itself to avoid a reference cycle.
--> 358 return weak_wrapped_fn().__wrapped__(*args, **kwds)
359 weak_wrapped_fn = weakref.ref(wrapped_fn)
360
/app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/saving_utils.py in _wrapped_model(*args)
141 with base_layer_utils.call_context().enter(
142 model, inputs=inputs, build_graph=False, training=False, saving=True):
--> 143 outputs_list = nest.flatten(model(inputs=inputs, training=False))
144
145 try:
/app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py in __call__(self, inputs, *args, **kwargs)
845 outputs = base_layer_utils.mark_as_return(outputs, acd)
846 else:
--> 847 outputs = call_fn(cast_inputs, *args, **kwargs)
848
849 except errors.OperatorNotAllowedInGraphError as e:
/app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/autograph/impl/api.py in wrapper(*args, **kwargs)
290 def wrapper(*args, **kwargs):
291 with ag_ctx.ControlStatusCtx(status=ag_ctx.Status.DISABLED):
--> 292 return func(*args, **kwargs)
293
294 if inspect.isfunction(func) or inspect.ismethod(func):
/app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/transformers/modeling_tf_albert.py in call(self, inputs, **kwargs)
783
784 def call(self, inputs, **kwargs):
--> 785 outputs = self.albert(inputs, **kwargs)
786
787 pooled_output = outputs[1]
/app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py in __call__(self, inputs, *args, **kwargs)
845 outputs = base_layer_utils.mark_as_return(outputs, acd)
846 else:
--> 847 outputs = call_fn(cast_inputs, *args, **kwargs)
848
849 except errors.OperatorNotAllowedInGraphError as e:
/app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/autograph/impl/api.py in wrapper(*args, **kwargs)
290 def wrapper(*args, **kwargs):
291 with ag_ctx.ControlStatusCtx(status=ag_ctx.Status.DISABLED):
--> 292 return func(*args, **kwargs)
293
294 if inspect.isfunction(func) or inspect.ismethod(func):
/app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/transformers/modeling_tf_albert.py in call(self, inputs, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, training)
680
681 embedding_output = self.embeddings(
--> 682 [input_ids, position_ids, token_type_ids, inputs_embeds], training=training)
683 encoder_outputs = self.encoder(
684 [embedding_output, extended_attention_mask, head_mask], training=training)
/app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py in __call__(self, inputs, *args, **kwargs)
889 with base_layer_utils.autocast_context_manager(
890 self._compute_dtype):
--> 891 outputs = self.call(cast_inputs, *args, **kwargs)
892 self._handle_activity_regularization(inputs, outputs)
893 self._set_mask_metadata(inputs, outputs, input_masks)
/app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/saved_model/utils.py in return_outputs_and_add_losses(*args, **kwargs)
55 inputs = args[inputs_arg_index]
56 args = args[inputs_arg_index + 1:]
---> 57 outputs, losses = fn(inputs, *args, **kwargs)
58 layer.add_loss(losses, inputs)
59 return outputs
/app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/saved_model/utils.py in wrap_with_training_arg(*args, **kwargs)
109 training,
110 lambda: replace_training_and_call(True),
--> 111 lambda: replace_training_and_call(False))
112
113 # Create arg spec for decorated function. If 'training' is not defined in the
/app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/keras/utils/tf_utils.py in smart_cond(pred, true_fn, false_fn, name)
57 pred, true_fn=true_fn, false_fn=false_fn, name=name)
58 return smart_module.smart_cond(
---> 59 pred, true_fn=true_fn, false_fn=false_fn, name=name)
60
61
/app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/framework/smart_cond.py in smart_cond(pred, true_fn, false_fn, name)
54 return true_fn()
55 else:
---> 56 return false_fn()
57 else:
58 return control_flow_ops.cond(pred, true_fn=true_fn, false_fn=false_fn,
/app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/saved_model/utils.py in <lambda>()
109 training,
110 lambda: replace_training_and_call(True),
--> 111 lambda: replace_training_and_call(False))
112
113 # Create arg spec for decorated function. If 'training' is not defined in the
/app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/saved_model/utils.py in replace_training_and_call(training)
104 def replace_training_and_call(training):
105 set_training_arg(training, training_arg_index, args, kwargs)
--> 106 return wrapped_call(*args, **kwargs)
107
108 return tf_utils.smart_cond(
/app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/saved_model/save_impl.py in __call__(self, *args, **kwargs)
531 if not self.call_collection.tracing:
532 self.call_collection.add_trace(*args, **kwargs)
--> 533 return super(LayerCall, self).__call__(*args, **kwargs)
534
535 def get_concrete_function(self, *args, **kwargs):
/app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py in __call__(self, *args, **kwds)
455
456 tracing_count = self._get_tracing_count()
--> 457 result = self._call(*args, **kwds)
458 if tracing_count == self._get_tracing_count():
459 self._call_counter.called_without_tracing()
/app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py in _call(self, *args, **kwds)
492 # In this case we have not created variables on the first call. So we can
493 # run the first trace but we should fail if variables are created.
--> 494 results = self._stateful_fn(*args, **kwds)
495 if self._created_variables:
496 raise ValueError("Creating variables on a non-first call to a function"
/app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py in __call__(self, *args, **kwargs)
1820 def __call__(self, *args, **kwargs):
1821 """Calls a graph function specialized to the inputs."""
-> 1822 graph_function, args, kwargs = self._maybe_define_function(args, kwargs)
1823 return graph_function._filtered_call(args, kwargs) # pylint: disable=protected-access
1824
/app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py in _maybe_define_function(self, args, kwargs)
2148 graph_function = self._function_cache.primary.get(cache_key, None)
2149 if graph_function is None:
-> 2150 graph_function = self._create_graph_function(args, kwargs)
2151 self._function_cache.primary[cache_key] = graph_function
2152 return graph_function, args, kwargs
/app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
2039 arg_names=arg_names,
2040 override_flat_arg_shapes=override_flat_arg_shapes,
-> 2041 capture_by_value=self._capture_by_value),
2042 self._function_attributes,
2043 # Tell the ConcreteFunction to clean up its graph once it goes out of
/app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)
913 converted_func)
914
--> 915 func_outputs = python_func(*func_args, **func_kwargs)
916
917 # invariant: `func_outputs` contains only Tensors, CompositeTensors,
/app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py in wrapped_fn(*args, **kwds)
356 # __wrapped__ allows AutoGraph to swap in a converted function. We give
357 # the function a weak reference to itself to avoid a reference cycle.
--> 358 return weak_wrapped_fn().__wrapped__(*args, **kwds)
359 weak_wrapped_fn = weakref.ref(wrapped_fn)
360
/app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/saved_model/save_impl.py in wrapper(*args, **kwargs)
513 layer, inputs=inputs, build_graph=False, training=training,
514 saving=True):
--> 515 ret = method(*args, **kwargs)
516 _restore_layer_losses(original_losses)
517 return ret
/app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/saved_model/utils.py in wrap_with_training_arg(*args, **kwargs)
109 training,
110 lambda: replace_training_and_call(True),
--> 111 lambda: replace_training_and_call(False))
112
113 # Create arg spec for decorated function. If 'training' is not defined in the
/app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/keras/utils/tf_utils.py in smart_cond(pred, true_fn, false_fn, name)
57 pred, true_fn=true_fn, false_fn=false_fn, name=name)
58 return smart_module.smart_cond(
---> 59 pred, true_fn=true_fn, false_fn=false_fn, name=name)
60
61
/app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/framework/smart_cond.py in smart_cond(pred, true_fn, false_fn, name)
54 return true_fn()
55 else:
---> 56 return false_fn()
57 else:
58 return control_flow_ops.cond(pred, true_fn=true_fn, false_fn=false_fn,
/app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/saved_model/utils.py in <lambda>()
109 training,
110 lambda: replace_training_and_call(True),
--> 111 lambda: replace_training_and_call(False))
112
113 # Create arg spec for decorated function. If 'training' is not defined in the
/app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/saved_model/utils.py in replace_training_and_call(training)
104 def replace_training_and_call(training):
105 set_training_arg(training, training_arg_index, args, kwargs)
--> 106 return wrapped_call(*args, **kwargs)
107
108 return tf_utils.smart_cond(
/app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/saved_model/save_impl.py in call_and_return_conditional_losses(inputs, *args, **kwargs)
555 layer_call = _get_layer_call_method(layer)
556 def call_and_return_conditional_losses(inputs, *args, **kwargs):
--> 557 return layer_call(inputs, *args, **kwargs), layer.get_losses_for(inputs)
558 return _create_call_fn_decorator(layer, call_and_return_conditional_losses)
559
/app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py in get_losses_for(self, inputs)
1382 losses = [l for l in self.losses if not l._unconditional_loss]
1383 inputs = nest.flatten(inputs)
-> 1384 reachable = tf_utils.get_reachable_from_inputs(inputs, losses)
1385 return [l for l in losses if l in reachable]
1386
/app/AI_RD/conda/envs/cont_tag_sup/lib/python3.7/site-packages/tensorflow_core/python/keras/utils/tf_utils.py in get_reachable_from_inputs(inputs, targets)
132 outputs = x.consumers()
133 else:
--> 134 raise TypeError('Expected Operation, Variable, or Tensor, got ' + str(x))
135
136 for y in outputs:
TypeError: Expected Operation, Variable, or Tensor, got None
**Describe the expected behavior**
**Code to reproduce the issue**
```python
import numpy as np
import tensorflow as tf
import pandas as pd
from sklearn.model_selection import train_test_split

import transformers
from transformers import AlbertConfig
from transformers import AlbertTokenizer
from transformers import TFAlbertForSequenceClassification
from transformers import InputExample
from transformers import glue_convert_examples_to_features

data_df = pd.read_excel("../input/test.xlsx")
model_dir = '../input/albert_xxlarge_v2/'
EPOCHS = 3
MAX_SEQ_LENGTH = 256
label_list = [0, 1]

config = AlbertConfig.from_pretrained('albert-xxlarge-v2')
tokenizer = AlbertTokenizer.from_pretrained('albert-xxlarge-v2', cache_dir=model_dir)
model = TFAlbertForSequenceClassification.from_pretrained('albert-xxlarge-v2', cache_dir=model_dir, config=config)

train_df, test_df = train_test_split(data_df[['id', 'text1', 'text2', 'LABEL']],
                                     random_state=42, shuffle=True,
                                     test_size=0.20, stratify=data_df['LABEL'])

train_InputExamples = train_df.apply(lambda x: InputExample(guid=x['id'],
                                                            text_a=x['text1'],
                                                            text_b=x['text2'],
                                                            label=x['LABEL']), axis=1)

train_dataset = glue_convert_examples_to_features(examples=train_InputExamples, tokenizer=tokenizer,
                                                  max_length=MAX_SEQ_LENGTH,
                                                  label_list=label_list, output_mode="classification")

optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5, epsilon=1e-08)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
metric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy')

# collect the features produced above into plain arrays
input_ids_train = []
attention_mask_train = []
token_type_ids_train = []
output_label_train = []
for f in train_dataset:
    input_ids_train.append(f.input_ids)
    attention_mask_train.append(f.attention_mask)
    token_type_ids_train.append(f.token_type_ids)
    output_label_train.append(f.label)

model.compile(optimizer=optimizer, loss=loss, metrics=[metric])

input_ids_train = np.array(input_ids_train)
attention_mask_train = np.array(attention_mask_train)
token_type_ids_train = np.array(token_type_ids_train)
output_label_train = np.array(output_label_train)

model.fit([input_ids_train, attention_mask_train, token_type_ids_train], y=output_label_train,
          epochs=EPOCHS, batch_size=4)

model.save('../output/my_model')
```
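In case it is relevant, a fallback I am considering (untested sketch on my side) is to serialize only the transformers weights and config with `save_pretrained`, instead of exporting the full Keras SavedModel:
```python
# Untested fallback sketch: save only the transformers weights + config
# instead of the full Keras SavedModel export that triggers the error above.
model.save_pretrained('../output/my_model')

# Hypothetical reload later:
# from transformers import TFAlbertForSequenceClassification
# model = TFAlbertForSequenceClassification.from_pretrained('../output/my_model')
```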
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2336/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2336/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2335 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2335/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2335/comments | https://api.github.com/repos/huggingface/transformers/issues/2335/events | https://github.com/huggingface/transformers/issues/2335 | 542,633,932 | MDU6SXNzdWU1NDI2MzM5MzI= | 2,335 | XLNet and RoBERTa embeddings | {
"login": "abhikjha",
"id": 22162223,
"node_id": "MDQ6VXNlcjIyMTYyMjIz",
"avatar_url": "https://avatars.githubusercontent.com/u/22162223?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhikjha",
"html_url": "https://github.com/abhikjha",
"followers_url": "https://api.github.com/users/abhikjha/followers",
"following_url": "https://api.github.com/users/abhikjha/following{/other_user}",
"gists_url": "https://api.github.com/users/abhikjha/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhikjha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhikjha/subscriptions",
"organizations_url": "https://api.github.com/users/abhikjha/orgs",
"repos_url": "https://api.github.com/users/abhikjha/repos",
"events_url": "https://api.github.com/users/abhikjha/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhikjha/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,577 | 1,583 | 1,583 | NONE | null | Referring to Jay Alammar's awesome blog post wherein he showed how to create sentence embeddings from BERT (DistilBert as well), can we use the workings he showed here for XLNet and RoBERTa models as well?
http://jalammar.github.io/a-visual-guide-to-using-bert-for-the-first-time/
For RoBERTa I was thinking of keeping everything largely the same, assuming the `<s>` token carries the classification output; for XLNet I would change the following line, since `<cls>` is at the end of the sequence rather than at the beginning as in BERT:
`features = last_hidden_states[0][:,-1,:].numpy()`
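For concreteness, this is the kind of change I have in mind for XLNet (just a sketch of my assumption, not verified, and it ignores batching/padding):
```python
import torch
from transformers import XLNetTokenizer, XLNetModel

# Assumption being tested: XLNetTokenizer appends <sep> <cls> at the END of the
# sequence, so the last position should hold the classification-style summary.
tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetModel.from_pretrained("xlnet-base-cased")

input_ids = torch.tensor([tokenizer.encode("a visual guide to using bert",
                                           add_special_tokens=True)])
with torch.no_grad():
    last_hidden_states = model(input_ids)[0]   # (batch, seq_len, hidden_size)

features = last_hidden_states[:, -1, :].numpy()  # <cls> is the last token here
```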
Any idea if my assumptions are correct? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2335/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2335/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2334 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2334/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2334/comments | https://api.github.com/repos/huggingface/transformers/issues/2334/events | https://github.com/huggingface/transformers/issues/2334 | 542,618,685 | MDU6SXNzdWU1NDI2MTg2ODU= | 2,334 | relativeattentionbias.weight in block 0 EncDecAttention of T5 Model not in original tf model. Where do we get it from? | {
"login": "swapnull7",
"id": 5597815,
"node_id": "MDQ6VXNlcjU1OTc4MTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5597815?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/swapnull7",
"html_url": "https://github.com/swapnull7",
"followers_url": "https://api.github.com/users/swapnull7/followers",
"following_url": "https://api.github.com/users/swapnull7/following{/other_user}",
"gists_url": "https://api.github.com/users/swapnull7/gists{/gist_id}",
"starred_url": "https://api.github.com/users/swapnull7/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/swapnull7/subscriptions",
"organizations_url": "https://api.github.com/users/swapnull7/orgs",
"repos_url": "https://api.github.com/users/swapnull7/repos",
"events_url": "https://api.github.com/users/swapnull7/events{/privacy}",
"received_events_url": "https://api.github.com/users/swapnull7/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"It should also be in the TF version, this is the shared relative attention bias (shared among layers).\r\n\r\nDo you want to give more details on how you compared both lists of weights and what make you think it's missing?",
"Sure. By the way, when we say the TF version, I mean the weights released by Google. So for the TF weights, here's what I do for `T5-Small`:\r\n\r\nTF:\r\n``` \r\nimport tensorflow as tf\r\nimport pprint # to prettify prints \r\nvar_list = tf.train.list_variables(\"/path/to/stored/T5/weights\") # basically replicated directory from google cloud\r\npprint.pprint(var_list)\r\n```\r\nPytorch:\r\n```\r\nimport transformers\r\nimport pprint\r\nfrom transformers import T5Model\r\nmodel = T5Model.from_pretrained('t5-small')\r\npytorch_var_list = [x[0] for x in model.named_parameters()] # get names cause we only use them\r\npprint.pprint(pytorch_var_list)\r\n```\r\nTF output for the `small` version looks something like: \r\n```\r\n[('decoder/block_000/layer_000/SelfAttention/k', [512, 512]),\r\n ('decoder/block_000/layer_000/SelfAttention/k_slot_vc', [512]),\r\n ('decoder/block_000/layer_000/SelfAttention/k_slot_vr', [512]),\r\n ('decoder/block_000/layer_000/SelfAttention/o', [512, 512]),\r\n ('decoder/block_000/layer_000/SelfAttention/o_slot_vc', [512]),\r\n ('decoder/block_000/layer_000/SelfAttention/o_slot_vr', [512]),\r\n ('decoder/block_000/layer_000/SelfAttention/q', [512, 512]),\r\n ('decoder/block_000/layer_000/SelfAttention/q_slot_vc', [512]),\r\n ('decoder/block_000/layer_000/SelfAttention/q_slot_vr', [512]),\r\n ('decoder/block_000/layer_000/SelfAttention/relative_attention_bias', [8, 32]), \r\n ('decoder/block_000/layer_000/SelfAttention/relative_attention_bias_slot_v',\r\n [8, 32]),\r\n ('decoder/block_000/layer_000/SelfAttention/v', [512, 512]),\r\n ('decoder/block_000/layer_000/SelfAttention/v_slot_vc', [512]),\r\n ('decoder/block_000/layer_000/SelfAttention/v_slot_vr', [512]),\r\n ('decoder/block_000/layer_000/layer_norm/scale', [512]),\r\n ('decoder/block_000/layer_000/layer_norm/scale_slot_v', [512]),\r\n ('decoder/block_000/layer_001/EncDecAttention/k', [512, 512]),\r\n ('decoder/block_000/layer_001/EncDecAttention/k_slot_vc', [512]),\r\n ('decoder/block_000/layer_001/EncDecAttention/k_slot_vr', [512]),\r\n ('decoder/block_000/layer_001/EncDecAttention/o', [512, 512]),\r\n ('decoder/block_000/layer_001/EncDecAttention/o_slot_vc', [512]),\r\n ('decoder/block_000/layer_001/EncDecAttention/o_slot_vr', [512]),\r\n ('decoder/block_000/layer_001/EncDecAttention/q', [512, 512]),\r\n ('decoder/block_000/layer_001/EncDecAttention/q_slot_vc', [512]),\r\n ('decoder/block_000/layer_001/EncDecAttention/q_slot_vr', [512]), \r\n # --------------------------------- Note: No relative_attention_bias in layer_001\r\n ('decoder/block_000/layer_001/EncDecAttention/v', [512, 512]),\r\n ('decoder/block_000/layer_001/EncDecAttention/v_slot_vc', [512]),\r\n ('decoder/block_000/layer_001/EncDecAttention/v_slot_vr', [512]),\r\n ('decoder/block_000/layer_001/layer_norm/scale', [512]),\r\n ('decoder/block_000/layer_001/layer_norm/scale_slot_v', [512]),\r\n ('decoder/block_000/layer_002/DenseReluDense/wi/kernel', [512, 2048]),\r\n ('decoder/block_000/layer_002/DenseReluDense/wi/kernel_slot_vc', [2048]),\r\n ('decoder/block_000/layer_002/DenseReluDense/wi/kernel_slot_vr', [512]),\r\n ('decoder/block_000/layer_002/DenseReluDense/wo/kernel', [2048, 512]),\r\n ('decoder/block_000/layer_002/DenseReluDense/wo/kernel_slot_vc', [2048]),\r\n ('decoder/block_000/layer_002/DenseReluDense/wo/kernel_slot_vr', [512]),\r\n ('decoder/block_000/layer_002/layer_norm/scale', [512]),\r\n ('decoder/block_000/layer_002/layer_norm/scale_slot_v', [512]),\r\n ('decoder/block_001/layer_000/SelfAttention/k', [512, 512]),\r\n ...\r\n 
... # Similar weights for all the other decoder blocks \r\n ...\r\n ('decoder/block_005/layer_002/layer_norm/scale_slot_v', [512]),\r\n ('decoder/final_layer_norm/scale', [512]),\r\n ('decoder/final_layer_norm/scale_slot_v', [512]),\r\n ('encoder/block_000/layer_000/SelfAttention/k', [512, 512]),\r\n ('encoder/block_000/layer_000/SelfAttention/k_slot_vc', [512]),\r\n ('encoder/block_000/layer_000/SelfAttention/k_slot_vr', [512]),\r\n ('encoder/block_000/layer_000/SelfAttention/o', [512, 512]),\r\n ('encoder/block_000/layer_000/SelfAttention/o_slot_vc', [512]),\r\n ('encoder/block_000/layer_000/SelfAttention/o_slot_vr', [512]),\r\n ('encoder/block_000/layer_000/SelfAttention/q', [512, 512]),\r\n ('encoder/block_000/layer_000/SelfAttention/q_slot_vc', [512]),\r\n ('encoder/block_000/layer_000/SelfAttention/q_slot_vr', [512]),\r\n ('encoder/block_000/layer_000/SelfAttention/relative_attention_bias', [8, 32]),\r\n ('encoder/block_000/layer_000/SelfAttention/relative_attention_bias_slot_v',\r\n [8, 32]),\r\n ('encoder/block_000/layer_000/SelfAttention/v', [512, 512]),\r\n ('encoder/block_000/layer_000/SelfAttention/v_slot_vc', [512]),\r\n ('encoder/block_000/layer_000/SelfAttention/v_slot_vr', [512]),\r\n ('encoder/block_000/layer_000/layer_norm/scale', [512]),\r\n ('encoder/block_000/layer_000/layer_norm/scale_slot_v', [512]),\r\n ('encoder/block_000/layer_001/DenseReluDense/wi/kernel', [512, 2048]),\r\n ('encoder/block_000/layer_001/DenseReluDense/wi/kernel_slot_vc', [2048]),\r\n ('encoder/block_000/layer_001/DenseReluDense/wi/kernel_slot_vr', [512]),\r\n ('encoder/block_000/layer_001/DenseReluDense/wo/kernel', [2048, 512]),\r\n ('encoder/block_000/layer_001/DenseReluDense/wo/kernel_slot_vc', [2048]),\r\n ('encoder/block_000/layer_001/DenseReluDense/wo/kernel_slot_vr', [512]),\r\n ('encoder/block_000/layer_001/layer_norm/scale', [512]),\r\n ('encoder/block_000/layer_001/layer_norm/scale_slot_v', [512]),\r\n ...\r\n ... # Similar weights for all the other encoder blocks \r\n ...\r\n ('encoder/block_005/layer_001/layer_norm/scale_slot_v', [512]),\r\n ('encoder/final_layer_norm/scale', [512]),\r\n ('encoder/final_layer_norm/scale_slot_v', [512]),\r\n ('global_step', []),\r\n ('shared/embedding', [32128, 512]),\r\n ('shared/embedding_slot_vc', [32128]),\r\n ('shared/embedding_slot_vr', [512])]```\r\n```\r\nPytorch output:\r\n```\r\n['shared.weight',\r\n 'encoder.block.0.layer.0.SelfAttention.q.weight',\r\n 'encoder.block.0.layer.0.SelfAttention.k.weight',\r\n 'encoder.block.0.layer.0.SelfAttention.v.weight',\r\n 'encoder.block.0.layer.0.SelfAttention.o.weight',\r\n 'encoder.block.0.layer.0.SelfAttention.relative_attention_bias.weight',\r\n 'encoder.block.0.layer.0.layer_norm.weight',\r\n 'encoder.block.0.layer.1.DenseReluDense.wi.weight',\r\n 'encoder.block.0.layer.1.DenseReluDense.wo.weight',\r\n 'encoder.block.0.layer.1.layer_norm.weight',\r\n 'encoder.block.1.layer.0.SelfAttention.q.weight',\r\n ...\r\n ... 
# Similar weights for all the other encoder blocks \r\n ...\r\n 'encoder.block.5.layer.1.layer_norm.weight',\r\n 'encoder.final_layer_norm.weight',\r\n 'decoder.block.0.layer.0.SelfAttention.q.weight',\r\n 'decoder.block.0.layer.0.SelfAttention.k.weight',\r\n 'decoder.block.0.layer.0.SelfAttention.v.weight',\r\n 'decoder.block.0.layer.0.SelfAttention.o.weight',\r\n 'decoder.block.0.layer.0.SelfAttention.relative_attention_bias.weight',\r\n 'decoder.block.0.layer.0.layer_norm.weight',\r\n 'decoder.block.0.layer.1.EncDecAttention.q.weight',\r\n 'decoder.block.0.layer.1.EncDecAttention.k.weight',\r\n 'decoder.block.0.layer.1.EncDecAttention.v.weight',\r\n 'decoder.block.0.layer.1.EncDecAttention.o.weight',\r\n 'decoder.block.0.layer.1.EncDecAttention.relative_attention_bias.weight', -----> Where does this guy come from? am I missing something in the original weights?\r\n 'decoder.block.0.layer.1.layer_norm.weight',\r\n 'decoder.block.0.layer.2.DenseReluDense.wi.weight',\r\n 'decoder.block.0.layer.2.DenseReluDense.wo.weight',\r\n 'decoder.block.0.layer.2.layer_norm.weight',\r\n ...\r\n ... # Similar weights for all the other encoder blocks \r\n ...\r\n 'decoder.block.5.layer.2.layer_norm.weight',\r\n 'decoder.final_layer_norm.weight']\r\n```\r\n\r\nSorry, now that I think about it, I should have provided this information in the original post itself. So I was wondering where does the weight `decoder.block.0.layer.1.EncDecAttention.relative_attention_bias.weight` come from cause I don't seem find it in the original tf weights file or am I missing something?",
"@thomwolf sorry for the push. Any update on this?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@swapnull7 how did you solve this issue?",
"Hey @swapnull7 , it seems that that it was a mistake and T5 isn't supposed to have relative attention bias between encoder and decoder. \r\nIt has been removed in the new version of transformers.\r\nI don't know where the pretrained weights for it came from 🤔 \r\n\r\nhttps://github.com/huggingface/transformers/issues/8933#issuecomment-739251827\r\n"
] | 1,577 | 1,622 | 1,585 | NONE | null | ## ❓ Questions & Help
Hi, I was comparing the weights in the original TF model and the PyTorch T5 model, and it looks like there is an extra embedding in the EncDecAttention layer (layer_1) of block_0 (relative_attention_bias.weight). I could find and compare the other embedding weights in the model, but not this particular one. Was this new parameter randomly initialized using the initializer and stored as is for the pre-trained model, or was it fine-tuned somehow and stored?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2334/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2334/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2333 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2333/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2333/comments | https://api.github.com/repos/huggingface/transformers/issues/2333/events | https://github.com/huggingface/transformers/pull/2333 | 542,612,586 | MDExOlB1bGxSZXF1ZXN0MzU3MDE3NTc0 | 2,333 | Add 'keep_accents' flag to basic tokenizer | {
"login": "josecannete",
"id": 12201153,
"node_id": "MDQ6VXNlcjEyMjAxMTUz",
"avatar_url": "https://avatars.githubusercontent.com/u/12201153?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/josecannete",
"html_url": "https://github.com/josecannete",
"followers_url": "https://api.github.com/users/josecannete/followers",
"following_url": "https://api.github.com/users/josecannete/following{/other_user}",
"gists_url": "https://api.github.com/users/josecannete/gists{/gist_id}",
"starred_url": "https://api.github.com/users/josecannete/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/josecannete/subscriptions",
"organizations_url": "https://api.github.com/users/josecannete/orgs",
"repos_url": "https://api.github.com/users/josecannete/repos",
"events_url": "https://api.github.com/users/josecannete/events{/privacy}",
"received_events_url": "https://api.github.com/users/josecannete/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2333?src=pr&el=h1) Report\n> Merging [#2333](https://codecov.io/gh/huggingface/transformers/pull/2333?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/77b0a385ffac5964030d08b1c3611b61370b1918?src=pr&el=desc) will **increase** coverage by `<.01%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2333?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2333 +/- ##\n==========================================\n+ Coverage 74.67% 74.67% +<.01% \n==========================================\n Files 87 87 \n Lines 14800 14802 +2 \n==========================================\n+ Hits 11052 11054 +2 \n Misses 3748 3748\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2333?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/2333/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `94.64% <ø> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/2333/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `90.41% <ø> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2333/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `96.26% <ø> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2333/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `88.35% <ø> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/2333/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `95.12% <ø> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2333/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.83% <ø> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2333/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.82% <ø> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2333/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jdHJsLnB5) | `97.82% <ø> (ø)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2333/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `87.6% <100%> (+0.04%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2333?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2333?src=pr&el=footer). Last update [77b0a38...386a104](https://codecov.io/gh/huggingface/transformers/pull/2333?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,577 | 1,584 | 1,584 | NONE | null | Hello!
Recently we released our Spanish Bert Model (https://github.com/dccuchile/beto) and we found problems with the tokenization for Spanish.
The problem is that the basic tokenizer converts the text to NFD.
For example:
```
text = "[CLS] compañera [SEP]"
tokenized_text = tokenizer.tokenize(text)
tokenized_text
['[CLS]', 'compa', '##ner', '##a', '[SEP]']
```
It changes *ñ* to *n*.
Another:
```
text = "[CLS] acción [SEP]"
tokenized_text = tokenizer.tokenize(text)
tokenized_text
['[CLS]', 'accion' ,'[SEP]']
```
It changes *ó* to *o*.
That behavior is not wanted for our Spanish model, so in this PR I'm adding a flag to control it.
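With this flag, the intended usage would look like this (a sketch; the model path is a placeholder for our Spanish checkpoint):
```python
from transformers import BertTokenizer

# Sketch of the intended usage of the new flag added in this PR:
# keep accents instead of stripping them during basic tokenization.
tokenizer = BertTokenizer.from_pretrained(
    "/path/to/spanish-bert",
    do_lower_case=False,
    keep_accents=True,  # new flag introduced by this PR
)
print(tokenizer.tokenize("[CLS] compañera [SEP]"))
```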
Waiting for your comments, thank you! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2333/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2333/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2333",
"html_url": "https://github.com/huggingface/transformers/pull/2333",
"diff_url": "https://github.com/huggingface/transformers/pull/2333.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2333.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/2332 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2332/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2332/comments | https://api.github.com/repos/huggingface/transformers/issues/2332/events | https://github.com/huggingface/transformers/issues/2332 | 542,604,542 | MDU6SXNzdWU1NDI2MDQ1NDI= | 2,332 | What does 'output of the embeddings' mean? | {
"login": "h56cho",
"id": 52889259,
"node_id": "MDQ6VXNlcjUyODg5MjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/h56cho",
"html_url": "https://github.com/h56cho",
"followers_url": "https://api.github.com/users/h56cho/followers",
"following_url": "https://api.github.com/users/h56cho/following{/other_user}",
"gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/h56cho/subscriptions",
"organizations_url": "https://api.github.com/users/h56cho/orgs",
"repos_url": "https://api.github.com/users/h56cho/repos",
"events_url": "https://api.github.com/users/h56cho/events{/privacy}",
"received_events_url": "https://api.github.com/users/h56cho/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The output of the embeddings is the sum of the token embeddings + the segment embeddings + the position embeddings. This value is the value that will be fed to the first layer of the transformer.",
"@LysandreJik \r\n\r\nHello,\r\n\r\nThank you very much for your reply.\r\n\r\nSo according to the Hugging Face Transformer documentation for the ```GPT2DoubleHeadsModel``` (under the 'output' section)\r\n\r\n```\r\nhidden_states: (optional, returned when config.output_hidden_states=True)\r\nlist of torch.FloatTensor (one for the output of each layer + the output of the embeddings) \r\n```\r\nSo in this case, would the first ```hidden_states``` tensor (index of 0) that is returned be the output of the embeddings, or would the very last ```hidden_states``` tensor that is returned be the output of the embeddings? \r\n\r\nI am confused about the order in which the ```hidden_states``` tensors are returned, because the documentation seem to indicate that the output of the embeddings is the last ```hidden_state``` tensor that is returned.\r\n\r\nThank you,",
"Indeed, the documentation might be misleading in that regard. The first value is the embedding output, every following value is the result of the preceding value being passed through an additional layer. I'll update the documentation shortly.",
"I remain confused by this and will be posting on the Disqus. "
] | 1,577 | 1,612 | 1,578 | NONE | null | Hello,
According to the Hugging Face Transformers documentation (https://huggingface.co/transformers/model_doc/gpt2.html#gpt2doubleheadsmodel),
the transformer's output ```hidden_state``` is defined as the following:
```
hidden_states: (optional, returned when config.output_hidden_states=True)
list of torch.FloatTensor (one for the output of each layer + the output of the embeddings)
of shape (batch_size, sequence_length, hidden_size): Hidden-states of the model
at the output of each layer plus the initial embedding outputs.
```
I am a bit confused by the statement ```list of torch.FloatTensor (one for the output of each layer + the output of the embeddings) ```. Does the ```output of the embeddings``` at the end of the statement refer to 'output of the uppermost output layer'?
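For concreteness, this is the small check I am planning to run (a sketch, not verified) to see which end of the returned tuple holds the embedding output:
```python
import torch
from transformers import GPT2Tokenizer, GPT2DoubleHeadsModel

# With output_hidden_states=True the model should return n_layer + 1 hidden-state
# tensors; the question is whether index 0 or index -1 is the embedding output.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2DoubleHeadsModel.from_pretrained("gpt2", output_hidden_states=True)

input_ids = torch.tensor([tokenizer.encode("hello world")])
outputs = model(input_ids)
hidden_states = outputs[-1]  # tuple of (n_layer + 1) tensors

print(len(hidden_states), model.config.n_layer)        # expecting n_layer + 1
print(hidden_states[0].shape, hidden_states[-1].shape)
```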
Thank you,
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2332/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2332/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2331 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2331/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2331/comments | https://api.github.com/repos/huggingface/transformers/issues/2331/events | https://github.com/huggingface/transformers/issues/2331 | 542,603,401 | MDU6SXNzdWU1NDI2MDM0MDE= | 2,331 | Learning Rate is not being updated by the Scheduler | {
"login": "h56cho",
"id": 52889259,
"node_id": "MDQ6VXNlcjUyODg5MjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/h56cho",
"html_url": "https://github.com/h56cho",
"followers_url": "https://api.github.com/users/h56cho/followers",
"following_url": "https://api.github.com/users/h56cho/following{/other_user}",
"gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/h56cho/subscriptions",
"organizations_url": "https://api.github.com/users/h56cho/orgs",
"repos_url": "https://api.github.com/users/h56cho/repos",
"events_url": "https://api.github.com/users/h56cho/events{/privacy}",
"received_events_url": "https://api.github.com/users/h56cho/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
".2f is not enough to represent learning rate",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,577 | 1,583 | 1,583 | NONE | null | Hello,
Outside of the training function, I set:
```python
# define the hyperparameters for running the train function.
optimizer_ch2 = AdamW(model_ch2.parameters(), lr = lr, correct_bias = True)
scheduler_ch2 = get_linear_schedule_with_warmup(optimizer = optimizer_ch2,
num_warmup_steps = 200,
num_training_steps = 1000,
last_epoch = -1)
```
and here is my train function:
```python
def train_lm_head(model, train_iter, optimizer, scheduler, log_interval, pad_index):
# turn on a training mode
model.train()
# initialize total_loss to 0
total_loss = 0
for batch_index, batch in enumerate(train_iter):
input_ids = [instance for instance in batch.text]
## NOTE: Positions embeddings can be automatically created by the GPT2DoubleHeadsModel as (0, 1, ..., N)
# set the gradient back to 0 (necessary step)
optimizer.zero_grad()
# notice here that we are only placing lm_labels
# as mc_label is unnecessary for language modelling purpose.
lm_labels = [-1] + input_ids[:(len(input_ids)-1)]
lm_labels = torch.tensor([lm_labels], dtype=torch.long)
input_ids = torch.tensor([input_ids], dtype=torch.long)
output = model(input_ids, lm_labels = lm_labels)
loss = output[0]
# 'loss' here is the cross entropy.
# recall: 'input_ids' is defined above.
# calculate gradient by backwarding the loss
# calculate gradient of the loss w.r.t weights
loss.backward()
# clips norm of the gradient of an iterable of parameters.
# The norm is computed over all gradients together, as if they were
# concatenated into a single vector. Gradients are modified in-place.
# so basically just normalizes the weights and returns them.
torch.nn.utils.clip_grad_norm_(model.parameters(), 0.5)
optimizer.step() # update the weights by following the WarmupLinearSchedule for the lr.
scheduler.step() # update the learning rate
        # update the running total with the calculated loss
total_loss = total_loss + loss
# python format: 's' for string, 'd' to display decimal integers (10-base), and 'f' for floats.
# ex: print("Sammy ate {0:.3f} percent of a pizza!".format(75.765367))
# >> Sammy ate 75.765 percent of a pizza!
# print("Sammy ate {0:f} percent of a {1}!".format(75, "pizza"))
# >> Sammy ate 75.000000 percent of a pizza!
#
# Below is good enough since we are doing the Stochastic Gradient Descent.
# (i.e. 1 batch = 1 sample)
if batch_index % log_interval == 0 and batch_index > 0:
print('| epoch {:3d} | {:5d}/{:5d} batches | lr {:02.2f} |'.format(
epoch, batch_index, len(train_iter), scheduler.get_lr()[0]))
total_loss = 0
```
and when I run the train function above for 5 epochs, I get the following output:
```
# ...
| epoch 1 | 138/ 4957 batches | lr 0.00 |
| epoch 1 | 139/ 4957 batches | lr 0.00 |
| epoch 1 | 140/ 4957 batches | lr 0.00 |
| epoch 1 | 141/ 4957 batches | lr 0.00 |
| epoch 1 | 142/ 4957 batches | lr 0.00 |
| epoch 1 | 143/ 4957 batches | lr 0.00 |
| epoch 1 | 144/ 4957 batches | lr 0.00 |
| epoch 1 | 145/ 4957 batches | lr 0.00 |
| epoch 1 | 146/ 4957 batches | lr 0.00 |
| epoch 1 | 147/ 4957 batches | lr 0.00 |
| epoch 1 | 148/ 4957 batches | lr 0.00 |
| epoch 1 | 149/ 4957 batches | lr 0.00 |
| epoch 1 | 150/ 4957 batches | lr 0.00 |
| epoch 1 | 151/ 4957 batches | lr 0.00 |
| epoch 1 | 152/ 4957 batches | lr 0.00 |
#... list goes on
```
I am a bit concerned about this output because the learning rate does not seem to be changing, even though my train function calls ```scheduler.step()``` right after ```optimizer.step()```.
What am I doing wrong here?
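One thing worth double-checking, since the print format above only shows two decimal places (a minimal sketch with hypothetical numbers, not values taken from this run):
```python
# With a small base learning rate and 200 warmup steps, the scheduler does step,
# but "{:02.2f}" rounds every warmup-phase value down to 0.00.
lr_at_step_100 = 5e-5 * 100 / 200              # linear warmup: base_lr * step / num_warmup_steps
print('lr {:02.2f}'.format(lr_at_step_100))    # -> lr 0.00
print('lr {:.2e}'.format(lr_at_step_100))      # -> lr 2.50e-05
```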
Thank you,
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2331/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2331/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2330 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2330/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2330/comments | https://api.github.com/repos/huggingface/transformers/issues/2330/events | https://github.com/huggingface/transformers/issues/2330 | 542,601,912 | MDU6SXNzdWU1NDI2MDE5MTI= | 2,330 | BERT adapted to time series | {
"login": "jbechara",
"id": 13783727,
"node_id": "MDQ6VXNlcjEzNzgzNzI3",
"avatar_url": "https://avatars.githubusercontent.com/u/13783727?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jbechara",
"html_url": "https://github.com/jbechara",
"followers_url": "https://api.github.com/users/jbechara/followers",
"following_url": "https://api.github.com/users/jbechara/following{/other_user}",
"gists_url": "https://api.github.com/users/jbechara/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jbechara/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jbechara/subscriptions",
"organizations_url": "https://api.github.com/users/jbechara/orgs",
"repos_url": "https://api.github.com/users/jbechara/repos",
"events_url": "https://api.github.com/users/jbechara/events{/privacy}",
"received_events_url": "https://api.github.com/users/jbechara/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Can this issue be opened again? I recon there is a need to discuss this possibility",
"@jbechara / @MJimitater : Hello! \r\n\r\nI happened to stumbled upon this issue earlier this week. We have a paper (with code and a new dataset), which is to appear in ICASSP '21, where we propose to model multivariate times series dataset through BERT and GPT2. Please give it a try to see if it serves your purpose! \r\n\r\nPaper: https://arxiv.org/abs/2011.01843\r\nCode: https://github.com/IBM/TabFormer"
] | 1,577 | 1,612 | 1,583 | NONE | null | ## ❓ Questions & Help
Is there a better way of modifying BERT to take time series as input (i.e. numerical data instead of text) than editing my local library to skip the word embedding? If not, what is the easiest way to do the latter?
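For reference, a minimal sketch of the second option (the projection layer and shapes are assumptions, not part of the library): depending on the installed version, BERT's forward pass accepts `inputs_embeds`, which skips the word-embedding lookup entirely.
```python
import torch
from transformers import BertModel

model = BertModel.from_pretrained("bert-base-uncased")

series = torch.randn(1, 128, 16)                          # dummy (batch, time steps, features)
project = torch.nn.Linear(16, model.config.hidden_size)   # map features to BERT's hidden size
outputs = model(inputs_embeds=project(series))            # no token ids, no word-embedding lookup
```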
Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2330/reactions",
"total_count": 7,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2330/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2329 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2329/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2329/comments | https://api.github.com/repos/huggingface/transformers/issues/2329/events | https://github.com/huggingface/transformers/pull/2329 | 542,570,038 | MDExOlB1bGxSZXF1ZXN0MzU2OTgzMTYw | 2,329 | refactoring the code | {
"login": "gautam1858",
"id": 4949778,
"node_id": "MDQ6VXNlcjQ5NDk3Nzg=",
"avatar_url": "https://avatars.githubusercontent.com/u/4949778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gautam1858",
"html_url": "https://github.com/gautam1858",
"followers_url": "https://api.github.com/users/gautam1858/followers",
"following_url": "https://api.github.com/users/gautam1858/following{/other_user}",
"gists_url": "https://api.github.com/users/gautam1858/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gautam1858/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gautam1858/subscriptions",
"organizations_url": "https://api.github.com/users/gautam1858/orgs",
"repos_url": "https://api.github.com/users/gautam1858/repos",
"events_url": "https://api.github.com/users/gautam1858/events{/privacy}",
"received_events_url": "https://api.github.com/users/gautam1858/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"We already have a `make style` command which automates formatting (with a setup that we chose).\r\n\r\nThanks for your contribution, closing this issue now."
] | 1,577 | 1,577 | 1,577 | NONE | null | code formatting, following PEP8 convention | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2329/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2329/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2329",
"html_url": "https://github.com/huggingface/transformers/pull/2329",
"diff_url": "https://github.com/huggingface/transformers/pull/2329.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2329.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/2328 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2328/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2328/comments | https://api.github.com/repos/huggingface/transformers/issues/2328/events | https://github.com/huggingface/transformers/pull/2328 | 542,561,633 | MDExOlB1bGxSZXF1ZXN0MzU2OTc2Mjg3 | 2,328 | Refactoring the code | {
"login": "gautam1858",
"id": 4949778,
"node_id": "MDQ6VXNlcjQ5NDk3Nzg=",
"avatar_url": "https://avatars.githubusercontent.com/u/4949778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gautam1858",
"html_url": "https://github.com/gautam1858",
"followers_url": "https://api.github.com/users/gautam1858/followers",
"following_url": "https://api.github.com/users/gautam1858/following{/other_user}",
"gists_url": "https://api.github.com/users/gautam1858/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gautam1858/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gautam1858/subscriptions",
"organizations_url": "https://api.github.com/users/gautam1858/orgs",
"repos_url": "https://api.github.com/users/gautam1858/repos",
"events_url": "https://api.github.com/users/gautam1858/events{/privacy}",
"received_events_url": "https://api.github.com/users/gautam1858/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,577 | 1,577 | 1,577 | NONE | null | Making the code formatting appropriate | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2328/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2328/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2328",
"html_url": "https://github.com/huggingface/transformers/pull/2328",
"diff_url": "https://github.com/huggingface/transformers/pull/2328.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2328.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/2327 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2327/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2327/comments | https://api.github.com/repos/huggingface/transformers/issues/2327/events | https://github.com/huggingface/transformers/issues/2327 | 542,558,785 | MDU6SXNzdWU1NDI1NTg3ODU= | 2,327 | load_and_cache_examples crashes on windows | {
"login": "yugant-git",
"id": 48283087,
"node_id": "MDQ6VXNlcjQ4MjgzMDg3",
"avatar_url": "https://avatars.githubusercontent.com/u/48283087?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yugant-git",
"html_url": "https://github.com/yugant-git",
"followers_url": "https://api.github.com/users/yugant-git/followers",
"following_url": "https://api.github.com/users/yugant-git/following{/other_user}",
"gists_url": "https://api.github.com/users/yugant-git/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yugant-git/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yugant-git/subscriptions",
"organizations_url": "https://api.github.com/users/yugant-git/orgs",
"repos_url": "https://api.github.com/users/yugant-git/repos",
"events_url": "https://api.github.com/users/yugant-git/events{/privacy}",
"received_events_url": "https://api.github.com/users/yugant-git/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,577 | 1,583 | 1,583 | NONE | null | ## 🐛 Bug
Model I am using: ALBERT
Language I am using the model on: English
The problem arises when using:
[examples/run_squad.py] the official example script (running evaluation with a locally cached, offline model)
It crashes in "load_and_cache_examples" for paths in Windows format. The crash comes from splitting the path on ('/'); on Windows it needs to be ('\\\\').
The task I am working on is:
SQUaD
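A standalone sketch of the path-splitting difference described above (illustration only, not the actual run_squad.py code):
```python
import os

path = r"..\models\albert-base-v2"       # a relative Windows-style path
print(path.split("/")[-1])               # the whole string: '/' never appears in it
print(os.path.basename(path))            # 'albert-base-v2' when run on Windows
```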
## To Reproduce
Steps to reproduce the behavior:
1. Cache a model into a local folder by running SQuAD 2.0 evaluation once with run_squad.py; this downloads the online model to the system.
2. Run evaluation again with model_name_or_path set to a local relative path that contains "..\\".
## Expected behavior
## Environment
* OS: Windows
* Python version: Python 3.6
* PyTorch version: torch 1.3.1
* PyTorch Transformers version (or branch): Latest
## Additional context
Not needed
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2327/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2327/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2326 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2326/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2326/comments | https://api.github.com/repos/huggingface/transformers/issues/2326/events | https://github.com/huggingface/transformers/issues/2326 | 542,550,271 | MDU6SXNzdWU1NDI1NTAyNzE= | 2,326 | run_generation.py gives TypeError when using xlnet due to empty dict being passed as token | {
"login": "nanne-aben",
"id": 47976799,
"node_id": "MDQ6VXNlcjQ3OTc2Nzk5",
"avatar_url": "https://avatars.githubusercontent.com/u/47976799?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nanne-aben",
"html_url": "https://github.com/nanne-aben",
"followers_url": "https://api.github.com/users/nanne-aben/followers",
"following_url": "https://api.github.com/users/nanne-aben/following{/other_user}",
"gists_url": "https://api.github.com/users/nanne-aben/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nanne-aben/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nanne-aben/subscriptions",
"organizations_url": "https://api.github.com/users/nanne-aben/orgs",
"repos_url": "https://api.github.com/users/nanne-aben/repos",
"events_url": "https://api.github.com/users/nanne-aben/events{/privacy}",
"received_events_url": "https://api.github.com/users/nanne-aben/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I have the same problem, did you find any solutions?\r\n@nanne-aben ",
"No, not really. I removed the empty dictionary, which makes the code run,\nbut the generated text is just kinda bad. GPT2 (in which case {} is not\nadded) creates much better text. So I guess that the {} was added for a\nreason, but I can't figure out why. Also, when using\nhttps://transformer.huggingface.co/ XLNET seems to make good text, so it\ndoes seem that something is wrong with simply removing the {}.\n\nI'm not sure how to proceed though... Still interested in resolving this\nthough, so please let me know if you find anything! Would be happy to\ncontribute something here.\n\nOn Wed, Feb 5, 2020 at 2:46 PM pooya khandel <[email protected]>\nwrote:\n\n> I have the same problem, did you find any solutions?\n> @nanne-aben <https://github.com/nanne-aben>\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/2326?email_source=notifications&email_token=ALOBCX4FHNIAMCNWJMQ3D4DRBK7KRA5CNFSM4J7LJOR2YY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOEK3PAYY#issuecomment-582414435>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ALOBCXYOF27U4R54BY3WZILRBK7KRANCNFSM4J7LJORQ>\n> .\n>\n",
"Hi, thank you for opening this issue. I'm fixing this in #2749.",
"Awesome, thanks!\n\nOn Wed, 5 Feb 2020 at 22:19, Lysandre Debut <[email protected]>\nwrote:\n\n> Hi, thank you for opening this issue. I'm fixing this in #2749\n> <https://github.com/huggingface/transformers/pull/2749>.\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/2326?email_source=notifications&email_token=ALOBCX3PHAP4E6EUC6VB2SLRBMUNDA5CNFSM4J7LJOR2YY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOEK5ATRI#issuecomment-582617541>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ALOBCX4HMZSVJHAHMQAQUSTRBMUNDANCNFSM4J7LJORQ>\n> .\n>\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,577 | 1,586 | 1,586 | NONE | null | ## 🐛 Bug
When I run
```
python run_generation.py --model_type=xlnet --model_name_or_path=xlnet-large-cased
```
I get the following error
```
Traceback (most recent call last):
File "run_generation.py", line 236, in <module>
main()
File "run_generation.py", line 214, in main
encoded_prompt = tokenizer.encode(prompt_text, add_special_tokens=False, return_tensors="pt")
File "/Users/nanneaben/Documents/Projects/2019/intelmatch/src/transformers/transformers/src/transformers/tokenization_utils.py", line 820, in encode
**kwargs
File "/Users/nanneaben/Documents/Projects/2019/intelmatch/src/transformers/transformers/src/transformers/tokenization_utils.py", line 912, in encode_plus
first_ids = get_input_ids(text)
File "/Users/nanneaben/Documents/Projects/2019/intelmatch/src/transformers/transformers/src/transformers/tokenization_utils.py", line 904, in get_input_ids
return self.convert_tokens_to_ids(text)
File "/Users/nanneaben/Documents/Projects/2019/intelmatch/src/transformers/transformers/src/transformers/tokenization_utils.py", line 751, in convert_tokens_to_ids
ids.append(self._convert_token_to_id_with_added_voc(token))
File "/Users/nanneaben/Documents/Projects/2019/intelmatch/src/transformers/transformers/src/transformers/tokenization_utils.py", line 758, in _convert_token_to_id_with_added_voc
if token in self.added_tokens_encoder:
TypeError: unhashable type: 'dict'
```
I think the problem lies in the following code in run_generation.py:
```
def prepare_xlnet_input(args, _, tokenizer, prompt_text):
prompt_text = (args.padding_text if args.padding_text else PADDING_TEXT) + prompt_text
return prompt_text, {}
```
This returns a tuple of (string, dict). As this gets passed down to _convert_token_to_id_with_added_voc(), it will first try to check whether prompt_text is in self.added_tokens_encoder, and then whether {} is in self.added_tokens_encoder (which raises a TypeError, because an unhashable ```{}``` cannot be used in a membership check against the ```added_tokens_encoder``` dict).
I'm not yet sure where the empty dict is supposed to be used, so I can't fix it myself. Would be happy to contribute though.
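A standalone reproduction of that failure mode (plain Python, not transformers code): a membership test against a dict hashes the candidate key, and an empty dict is unhashable.
```python
added_tokens_encoder = {"<pad>": 5}          # stand-in for tokenizer.added_tokens_encoder
print("<pad>" in added_tokens_encoder)       # True
print({} in added_tokens_encoder)            # raises TypeError: unhashable type: 'dict'
```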
## Important information
Model I am using (Bert, XLNet....): XLNet
Language I am using the model on (English, Chinese....): English
The problem arises when using:
* [x] the official example scripts: run_generation.py
* [ ] my own modified scripts: (give details)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: NA
* [ ] my own task or dataset: NA
## To Reproduce
Steps to reproduce the behavior:
1. cd examples
2. python run_generation.py --model_type=xlnet --model_name_or_path=xlnet-large-cased
```
Traceback (most recent call last):
File "run_generation.py", line 236, in <module>
main()
File "run_generation.py", line 214, in main
encoded_prompt = tokenizer.encode(prompt_text, add_special_tokens=False, return_tensors="pt")
File "/Users/nanneaben/Documents/Projects/2019/intelmatch/src/transformers/transformers/src/transformers/tokenization_utils.py", line 820, in encode
**kwargs
File "/Users/nanneaben/Documents/Projects/2019/intelmatch/src/transformers/transformers/src/transformers/tokenization_utils.py", line 912, in encode_plus
first_ids = get_input_ids(text)
File "/Users/nanneaben/Documents/Projects/2019/intelmatch/src/transformers/transformers/src/transformers/tokenization_utils.py", line 904, in get_input_ids
return self.convert_tokens_to_ids(text)
File "/Users/nanneaben/Documents/Projects/2019/intelmatch/src/transformers/transformers/src/transformers/tokenization_utils.py", line 751, in convert_tokens_to_ids
ids.append(self._convert_token_to_id_with_added_voc(token))
File "/Users/nanneaben/Documents/Projects/2019/intelmatch/src/transformers/transformers/src/transformers/tokenization_utils.py", line 758, in _convert_token_to_id_with_added_voc
if token in self.added_tokens_encoder:
TypeError: unhashable type: 'dict'
```
## Expected behavior
Running without an error.
## Environment
* OS: Mac OS Mojave 10.14.4
* Python version: 3.7.3
* PyTorch version: 1.3.1
* PyTorch Transformers version (or branch): current master (8c67b529f615cc24c46864b8323d2d47a15ccd58)
* Using GPU ? No
* Distributed of parallel setup ? No
* Any other relevant information: NA
## Additional context
NA | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2326/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2326/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2325 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2325/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2325/comments | https://api.github.com/repos/huggingface/transformers/issues/2325/events | https://github.com/huggingface/transformers/issues/2325 | 542,542,936 | MDU6SXNzdWU1NDI1NDI5MzY= | 2,325 | How to make FP16 quantization on gpt/xl? | {
"login": "Archelunch",
"id": 10900176,
"node_id": "MDQ6VXNlcjEwOTAwMTc2",
"avatar_url": "https://avatars.githubusercontent.com/u/10900176?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Archelunch",
"html_url": "https://github.com/Archelunch",
"followers_url": "https://api.github.com/users/Archelunch/followers",
"following_url": "https://api.github.com/users/Archelunch/following{/other_user}",
"gists_url": "https://api.github.com/users/Archelunch/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Archelunch/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Archelunch/subscriptions",
"organizations_url": "https://api.github.com/users/Archelunch/orgs",
"repos_url": "https://api.github.com/users/Archelunch/repos",
"events_url": "https://api.github.com/users/Archelunch/events{/privacy}",
"received_events_url": "https://api.github.com/users/Archelunch/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Closing this in favor of https://github.com/huggingface/tflite-android-transformers/issues/4"
] | 1,577 | 1,577 | 1,577 | NONE | null | How could I fix this error?
`ValueError: Message tensorflow.GraphDef exceeds maximum protobuf size of 2GB: 6234365906` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2325/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2325/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2324 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2324/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2324/comments | https://api.github.com/repos/huggingface/transformers/issues/2324/events | https://github.com/huggingface/transformers/pull/2324 | 542,527,329 | MDExOlB1bGxSZXF1ZXN0MzU2OTQ3NjU0 | 2,324 | Typo in serving.py | {
"login": "kashif",
"id": 8100,
"node_id": "MDQ6VXNlcjgxMDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kashif",
"html_url": "https://github.com/kashif",
"followers_url": "https://api.github.com/users/kashif/followers",
"following_url": "https://api.github.com/users/kashif/following{/other_user}",
"gists_url": "https://api.github.com/users/kashif/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kashif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kashif/subscriptions",
"organizations_url": "https://api.github.com/users/kashif/orgs",
"repos_url": "https://api.github.com/users/kashif/repos",
"events_url": "https://api.github.com/users/kashif/events{/privacy}",
"received_events_url": "https://api.github.com/users/kashif/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2324?src=pr&el=h1) Report\n> Merging [#2324](https://codecov.io/gh/huggingface/transformers/pull/2324?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/aeef4823ab6099249679756182700e6800024c36?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2324?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2324 +/- ##\n=======================================\n Coverage 73.49% 73.49% \n=======================================\n Files 87 87 \n Lines 14793 14793 \n=======================================\n Hits 10872 10872 \n Misses 3921 3921\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2324?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/commands/serving.py](https://codecov.io/gh/huggingface/transformers/pull/2324/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy9zZXJ2aW5nLnB5) | `0% <ø> (ø)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2324?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2324?src=pr&el=footer). Last update [aeef482...7211541](https://codecov.io/gh/huggingface/transformers/pull/2324?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,577 | 1,577 | 1,577 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2324/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2324/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2324",
"html_url": "https://github.com/huggingface/transformers/pull/2324",
"diff_url": "https://github.com/huggingface/transformers/pull/2324.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2324.patch",
"merged_at": 1577360287000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/2323 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2323/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2323/comments | https://api.github.com/repos/huggingface/transformers/issues/2323/events | https://github.com/huggingface/transformers/issues/2323 | 542,497,542 | MDU6SXNzdWU1NDI0OTc1NDI= | 2,323 | Where does the pre-trained bert model gets cached in my system by default? | {
"login": "13Ashu",
"id": 29479186,
"node_id": "MDQ6VXNlcjI5NDc5MTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/29479186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/13Ashu",
"html_url": "https://github.com/13Ashu",
"followers_url": "https://api.github.com/users/13Ashu/followers",
"following_url": "https://api.github.com/users/13Ashu/following{/other_user}",
"gists_url": "https://api.github.com/users/13Ashu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/13Ashu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/13Ashu/subscriptions",
"organizations_url": "https://api.github.com/users/13Ashu/orgs",
"repos_url": "https://api.github.com/users/13Ashu/repos",
"events_url": "https://api.github.com/users/13Ashu/events{/privacy}",
"received_events_url": "https://api.github.com/users/13Ashu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"AFAIK, the cache folder is hidden. You can download the files manually and the save them to your desired location two files to download is config.json and <model--name>.bin and you can call it through pretrained suppose you wanted to instantiate BERT then do `BertForMaskedLM.from_pretrained(Users/<Your location>/<your folder name>)`",
"Each file in the cache comes with a .json file describing what's inside.\r\n\r\n_This isn't part of transformers' public API and may change at any time in the future._\r\n\r\nAnyway, here's how you can locate a specific file:\r\n\r\n```\r\n$ cd ~/.cache/torch/transformers\r\n$ grep /bert-base-uncased *.json\r\n26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084.json:{\"etag\": \"\\\"64800d5d8528ce344256daf115d4965e\\\"\", \"url\": \"https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt\"}\r\n4dad0251492946e18ac39290fcfe91b89d370fee250efe9521476438fe8ca185.bf3b9ea126d8c0001ee8a1e8b92229871d06d36d8808208cc2449280da87785c.json:{\"etag\": \"\\\"74d4f96fdabdd865cbdbe905cd46c1f1\\\"\", \"url\": \"https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-config.json\"}\r\nd667df51ec24c20190f01fb4c20a21debc4c4fc12f7e2f5441ac0a99690e3ee9.4733ec82e81d40e9cf5fd04556267d8958fb150e9339390fc64206b7e5a79c83.h5.json:{\"etag\": \"\\\"41a0e56472bad33498744818c8b1ef2c-64\\\"\", \"url\": \"https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-tf_model.h5\"}\r\n```\r\n\r\nHere, `bert-base-uncased-tf_model.h5` is cached as `d667df51ec24c20190f01fb4c20a21debc4c4fc12f7e2f5441ac0a99690e3ee9.4733ec82e81d40e9cf5fd04556267d8958fb150e9339390fc64206b7e5a79c83.h5`.",
"The discussion in #2157 could be useful too.",
"Hi!\r\nWhat if I use colab then how can I find the cash file? @aaugustin ",
"For anyone landed here wondering if one can globally change the cache directory: set `PYTORCH_TRANSFORMERS_CACHE` environment variable in shell before running the python interpreter.",
"You can get find it the same way transformers do it:\r\n\r\n from transformers.file_utils import hf_bucket_url, cached_path\r\n pretrained_model_name = 'DeepPavlov/rubert-base-cased'\r\n archive_file = hf_bucket_url(\r\n pretrained_model_name,\r\n filename='pytorch_model.bin',\r\n use_cdn=True,\r\n )\r\n resolved_archive_file = cached_path(archive_file)\r\n",
"For me huggingface changed the default cache folder to:\r\n```\r\n~/.cache/huggingface/transformers\r\n```",
"> You can get find it the same way transformers do it:\r\n> \r\n> ```\r\n> from transformers.file_utils import hf_bucket_url, cached_path\r\n> pretrained_model_name = 'DeepPavlov/rubert-base-cased'\r\n> archive_file = hf_bucket_url(\r\n> pretrained_model_name,\r\n> filename='pytorch_model.bin',\r\n> use_cdn=True,\r\n> )\r\n> resolved_archive_file = cached_path(archive_file)\r\n> ```\r\n\r\nThank you, this worked for me! \r\n\r\nNote that I had to remove the `use_cdn` option. Additionally, it does not seem to tell you where the `vocab.txt` and other files are located ",
"> > You can get find it the same way transformers do it:\r\n> > ```\r\n> > from transformers.file_utils import hf_bucket_url, cached_path\r\n> > pretrained_model_name = 'DeepPavlov/rubert-base-cased'\r\n> > archive_file = hf_bucket_url(\r\n> > pretrained_model_name,\r\n> > filename='pytorch_model.bin',\r\n> > use_cdn=True,\r\n> > )\r\n> > resolved_archive_file = cached_path(archive_file)\r\n> > ```\r\n> \r\n> Thank you, this worked for me!\r\n> \r\n> Note that I had to remove the `use_cdn` option. Additionally, it does not seem to tell you where the `vocab.txt` and other files are located\r\n\r\nNote that the hf_bucket_url has been removed so you can use this now. [ImportError: cannot import name 'hf_bucket_url' from 'transformers.file_utils' #22390](https://github.com/huggingface/transformers/issues/22390) "
] | 1,577 | 1,705 | 1,577 | NONE | null | ## ❓ Questions & Help
I used model_class.from_pretrained('bert-base-uncased') to download and use the model. The next time I use this command, it picks up the model from the cache. But when I look inside the cache, I see several files over 400 MB with long random names. How do I know which one is the bert-base-uncased or distilbert-base-uncased model? Maybe I am looking in the wrong place | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2323/reactions",
"total_count": 12,
"+1": 11,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2323/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2322 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2322/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2322/comments | https://api.github.com/repos/huggingface/transformers/issues/2322/events | https://github.com/huggingface/transformers/issues/2322 | 542,469,582 | MDU6SXNzdWU1NDI0Njk1ODI= | 2,322 | I am getting repetitive output when running "python run_generation.py" | {
"login": "sunshinelala1991",
"id": 10409374,
"node_id": "MDQ6VXNlcjEwNDA5Mzc0",
"avatar_url": "https://avatars.githubusercontent.com/u/10409374?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sunshinelala1991",
"html_url": "https://github.com/sunshinelala1991",
"followers_url": "https://api.github.com/users/sunshinelala1991/followers",
"following_url": "https://api.github.com/users/sunshinelala1991/following{/other_user}",
"gists_url": "https://api.github.com/users/sunshinelala1991/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sunshinelala1991/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sunshinelala1991/subscriptions",
"organizations_url": "https://api.github.com/users/sunshinelala1991/orgs",
"repos_url": "https://api.github.com/users/sunshinelala1991/repos",
"events_url": "https://api.github.com/users/sunshinelala1991/events{/privacy}",
"received_events_url": "https://api.github.com/users/sunshinelala1991/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I guess you can tune the model for better results like selecting medium large gpt model changin temp and top - p to get different predictions. \r\nIf your new try using [write with transformer](https://transformer.huggingface.co/doc/gpt2-large) to get an idea about it.",
"You could add a `repetition_penalty`. Running \r\npython run_generation.py --model_type=gpt2 --model_name_or_path=gpt2 --length=100 --repetition_penalty=1.2\r\n\r\nwould give:\r\n\r\n```\r\nModel prompt >>> nice to meet you\r\nnice to meet you.\r\nI'm sorry, but I don't know what's going on here.\" She said with a smile that made me feel like she\r\nwas trying hard to be nice and not mean or anything… \"You're just saying it because we've been \r\ntogether for so long…\" Her voice sounded very serious as if someone had asked her about the past \r\ncouple of days before they'd met up in person at all!\r\n```\r\n\r\nYou would need a very up-to-date version of transformers to make sure that the PR #2303 is included in your code to be sure that the `repetition_penalty` is working correctly.\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,577 | 1,582 | 1,582 | NONE | null | ## ❓ Questions & Help
Here is the command I used to run the code:
python run_generation.py --model_type=gpt2 --model_name_or_path=gpt2 --length 100
Here is the input and output I got:
Model prompt >>> nice to meet you
nice to meet you.
"I'm sorry, but I'm not going to be able to meet you. I'm not going to be able to meet you. I'm not going to be able to meet you. I'm not going to be able to meet you. I'm not going to be able to meet you. I'm not going to be able to meet you. I'm not going to be able to meet you. I'm not going to be able to meet!
I have tried different inputs but the output is always repeated.
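For comparison, a variant of the command with sampling settings that are supposed to discourage this kind of looping (whether these flags are available depends on the installed version of the script):
```
python run_generation.py --model_type=gpt2 --model_name_or_path=gpt2 --length=100 --temperature=0.9 --repetition_penalty=1.2
```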
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2322/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2322/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2321 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2321/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2321/comments | https://api.github.com/repos/huggingface/transformers/issues/2321/events | https://github.com/huggingface/transformers/issues/2321 | 542,455,784 | MDU6SXNzdWU1NDI0NTU3ODQ= | 2,321 | Bert Decoder using is_decoder and encoder_hidden_states | {
"login": "shashankMadan-designEsthetics",
"id": 45225143,
"node_id": "MDQ6VXNlcjQ1MjI1MTQz",
"avatar_url": "https://avatars.githubusercontent.com/u/45225143?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shashankMadan-designEsthetics",
"html_url": "https://github.com/shashankMadan-designEsthetics",
"followers_url": "https://api.github.com/users/shashankMadan-designEsthetics/followers",
"following_url": "https://api.github.com/users/shashankMadan-designEsthetics/following{/other_user}",
"gists_url": "https://api.github.com/users/shashankMadan-designEsthetics/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shashankMadan-designEsthetics/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shashankMadan-designEsthetics/subscriptions",
"organizations_url": "https://api.github.com/users/shashankMadan-designEsthetics/orgs",
"repos_url": "https://api.github.com/users/shashankMadan-designEsthetics/repos",
"events_url": "https://api.github.com/users/shashankMadan-designEsthetics/events{/privacy}",
"received_events_url": "https://api.github.com/users/shashankMadan-designEsthetics/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi, you're initializing a decoder but you're using it as an encoder. For the task you're showing here, you only need the encoder part, no need to initialize a decoder:\r\n\r\n```py\r\nmodel = BertForMaskedLM.from_pretrained('bert-base-uncased')\r\nmodel.eval()\r\n#\r\n# # Predict all tokens\r\nwith torch.no_grad():\r\n outputs = model(tokens_tensor, token_type_ids=segments_tensors)\r\n predictions = outputs[0]\r\n```\r\n\r\nYou can see an example of the Model2Model architecture (encoder-decoder) based on BERT in the [quickstart section of the documentation.](https://huggingface.co/transformers/quickstart.html#model2model-example)",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hi @LysandreJik ,\r\nI intend to use Bert with a generative head. \r\nCan you give an example of using bert with is_decoder as True?"
] | 1,577 | 1,590 | 1,584 | NONE | null | ```
import torch
from transformers import BertTokenizer, BertModel, BertForMaskedLM
# Load pre-trained model tokenizer (vocabulary)
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
text = "[CLS] For an unfamiliar eye, the Porsche Cayenne and the Cayenne Coupe would look similar"
tokenized_text = tokenizer.tokenize(text)
# Mask a token that we will try to predict back with `BertForMaskedLM`
masked_index = 3
tokenized_text[masked_index] = '[MASK]'
print(tokenized_text)
# Convert token to vocabulary indices
indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)
tokens = tokenizer.convert_ids_to_tokens(indexed_tokens)
string = tokenizer.convert_tokens_to_string(tokens)
# # Define sentence A and B indices associated to 1st and 2nd sentences (see paper)
segments_ids = [0 for x in range(len(tokenized_text))]
#
# # Convert inputs to PyTorch tensors
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])
#
model = BertForMaskedLM.from_pretrained('bert-base-uncased', is_decoder=True)
model.eval()
#
# # Predict all tokens
with torch.no_grad():
outputs = model(tokens_tensor, token_type_ids=segments_tensors, tokens=tokenized_text, encoder_hidden_states=tokens_tensor)
predictions = outputs[0]
print('state_dict',len(model.state_dict()))
predicted_indices = []
# # confirm we were able to predict 'henson'
for i in range(len(tokenized_text)):
predicted_indices.append(torch.argmax(predictions[0, i]).item())
# predicted_index = torch.argmax(predictions[0, masked_index]).item()
predicted_token = tokenizer.convert_ids_to_tokens(predicted_indices)[0]
print('indexed_tokens', indexed_tokens)
print('predicted_indices', predicted_indices)
predicted_text = tokenizer.decode(predicted_indices)
print(predicted_text)
```
In `modeling_bert` it's mentioned
```
To behave as an decoder the model needs to be initialized with the
`is_decoder` argument of the configuration set to `True`; an
`encoder_hidden_states` is expected as an input to the forward pass.
```
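A minimal sketch of my reading of that docstring (the config handling and dummy inputs here are assumptions, and exact behavior may differ across versions): `is_decoder` lives on the config, and `encoder_hidden_states` should be the float hidden states produced by an encoder forward pass, not a tensor of token ids.
```python
import torch
from transformers import BertConfig, BertModel, BertForMaskedLM

encoder = BertModel.from_pretrained('bert-base-uncased')

decoder_config = BertConfig.from_pretrained('bert-base-uncased')
decoder_config.is_decoder = True
decoder = BertForMaskedLM.from_pretrained('bert-base-uncased', config=decoder_config)

input_ids = torch.tensor([[101, 102]])                      # dummy ids; a real run would use the tokenizer
with torch.no_grad():
    encoder_hidden_states = encoder(input_ids)[0]           # (batch, seq_len, hidden_size) floats
    outputs = decoder(input_ids, encoder_hidden_states=encoder_hidden_states)
```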
So I followed the docstring in my own code (shown above), but I get 2 errors:
`INFO:transformers.modeling_utils:Weights of BertForMaskedLM not initialized from pretrained model: ['bert.encoder.layer.0.crossattention.self.query.weight`
and
```
File "/Volumes/Data/transformers-master/transformers/modeling_bert.py", line 679, in forward
extended_attention_mask = causal_mask[:, None, :, :] * attention_mask[:, None, None, :]
RuntimeError: expected device cpu and dtype Float but got device cpu and dtype Bool
```
Am I missing something, or is this the wrong way to configure a BERT decoder? In general, I'd like to know how the encoder-decoder transformer works in BERT | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2321/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2321/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2320 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2320/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2320/comments | https://api.github.com/repos/huggingface/transformers/issues/2320/events | https://github.com/huggingface/transformers/issues/2320 | 542,420,530 | MDU6SXNzdWU1NDI0MjA1MzA= | 2,320 | how to do a simple multi-classifier by bert 2.0,training set ,and label set all lines | {
"login": "tongbc",
"id": 14326577,
"node_id": "MDQ6VXNlcjE0MzI2NTc3",
"avatar_url": "https://avatars.githubusercontent.com/u/14326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tongbc",
"html_url": "https://github.com/tongbc",
"followers_url": "https://api.github.com/users/tongbc/followers",
"following_url": "https://api.github.com/users/tongbc/following{/other_user}",
"gists_url": "https://api.github.com/users/tongbc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tongbc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tongbc/subscriptions",
"organizations_url": "https://api.github.com/users/tongbc/orgs",
"repos_url": "https://api.github.com/users/tongbc/repos",
"events_url": "https://api.github.com/users/tongbc/events{/privacy}",
"received_events_url": "https://api.github.com/users/tongbc/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,577 | 1,583 | 1,583 | NONE | null | How can I build a simple multi-class classifier with BERT in transformers 2.0, where the training set and the label set each have one entry per line? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2320/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2320/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2319 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2319/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2319/comments | https://api.github.com/repos/huggingface/transformers/issues/2319/events | https://github.com/huggingface/transformers/issues/2319 | 542,418,432 | MDU6SXNzdWU1NDI0MTg0MzI= | 2,319 | help: couldn't find such vocabulary files at this path or url | {
"login": "WenxiongLiao",
"id": 25845940,
"node_id": "MDQ6VXNlcjI1ODQ1OTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/25845940?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WenxiongLiao",
"html_url": "https://github.com/WenxiongLiao",
"followers_url": "https://api.github.com/users/WenxiongLiao/followers",
"following_url": "https://api.github.com/users/WenxiongLiao/following{/other_user}",
"gists_url": "https://api.github.com/users/WenxiongLiao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WenxiongLiao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WenxiongLiao/subscriptions",
"organizations_url": "https://api.github.com/users/WenxiongLiao/orgs",
"repos_url": "https://api.github.com/users/WenxiongLiao/repos",
"events_url": "https://api.github.com/users/WenxiongLiao/events{/privacy}",
"received_events_url": "https://api.github.com/users/WenxiongLiao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Did you manage to solve your issue? (if you did, how?)"
] | 1,577 | 1,577 | 1,577 | NONE | null | I want to load a pre-trained Chinese RoBERTa model. When I use RobertaModel.from_pretrained() to load the pre-trained model, it doesn't work.
<img width="1108" alt="屏幕快照 2019-12-25 下午11 56 11" src="https://user-images.githubusercontent.com/25845940/71454755-21f98100-27cd-11ea-8d0d-37beed6cc235.png">
<img width="1058" alt="屏幕快照 2019-12-25 下午11 56 17" src="https://user-images.githubusercontent.com/25845940/71455892-02b12280-27d2-11ea-98d6-ac1b2bd45901.png">
The Chinese RoBERTa model was downloaded from https://github.com/brightmart/roberta_zh
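(For reference, a hedged sketch: assuming the published checkpoint is in BERT format with a vocab.txt, and that the weights have been converted to a local pytorch_model.bin, the BERT classes can load the directory directly, whereas RobertaTokenizer looks for a vocab.json/merges.txt pair. The directory name below is a placeholder.)
```python
from transformers import BertTokenizer, BertModel

model_dir = "model/chinese_roberta"              # placeholder local directory
tokenizer = BertTokenizer.from_pretrained(model_dir)
model = BertModel.from_pretrained(model_dir)
```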
I am not sure whether the problem is with the pre-trained model or with the transformers framework | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2319/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2319/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2318 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2318/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2318/comments | https://api.github.com/repos/huggingface/transformers/issues/2318/events | https://github.com/huggingface/transformers/issues/2318 | 542,406,032 | MDU6SXNzdWU1NDI0MDYwMzI= | 2,318 | How can I read my bert model by using transformers? | {
"login": "Little-Frog-233",
"id": 39156078,
"node_id": "MDQ6VXNlcjM5MTU2MDc4",
"avatar_url": "https://avatars.githubusercontent.com/u/39156078?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Little-Frog-233",
"html_url": "https://github.com/Little-Frog-233",
"followers_url": "https://api.github.com/users/Little-Frog-233/followers",
"following_url": "https://api.github.com/users/Little-Frog-233/following{/other_user}",
"gists_url": "https://api.github.com/users/Little-Frog-233/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Little-Frog-233/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Little-Frog-233/subscriptions",
"organizations_url": "https://api.github.com/users/Little-Frog-233/orgs",
"repos_url": "https://api.github.com/users/Little-Frog-233/repos",
"events_url": "https://api.github.com/users/Little-Frog-233/events{/privacy}",
"received_events_url": "https://api.github.com/users/Little-Frog-233/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Not clearly sure what your question is but i guess u need to change\r\n`from pytorch_pretrained_bert import BertModel, BertTokenizer` to `from transformers import BertModel, BertTokenizer` \r\nDownload latest version if not-found-module error occurs...",
"thank u for your replay, I tryed from transformers import BertModel, BertTokenizer, but it reminds me that OSError: Model name 'model/chinese_L-12_H-768_A-12' was not found in model name list, does it means that i can't use my local model?\r\n\r\n\r\n\r\n\r\n------------------ 原始邮件 ------------------\r\n发件人: \"shashankMadan-designEsthetics\"<[email protected]>;\r\n发送时间: 2019年12月26日(星期四) 晚上7:48\r\n收件人: \"huggingface/transformers\"<[email protected]>;\r\n抄送: \"晓辰\"<[email protected]>;\"Author\"<[email protected]>;\r\n主题: Re: [huggingface/transformers] How can I read my bert model by using transformers? (#2318)\r\n\r\n\r\n\r\n\r\nNot clearly sure what your question is but i guess u need to change\r\n from pytorch_pretrained_bert import BertModel, BertTokenizer to from transformers import BertModel, BertTokenizer\r\n Download latest version if not-found-module error occurs...\r\n \r\n—\r\nYou are receiving this because you authored the thread.\r\nReply to this email directly, view it on GitHub, or unsubscribe.",
"Well You can and it should work. \r\nFirst Try using just `bert-uncased` to check if its working correctly or maybe with `BERT-Base, Chinese`. \r\nIf it says not found it may be some error from your local url so check if its the right folder location and name.\r\nThen if nothing works i guess you may need to finetune it.",
"i tryed and it works, but so slow, may because i'm in china, HAHA, thank you very much\r\n\r\n\r\n\r\n\r\n------------------ 原始邮件 ------------------\r\n发件人: \"shashankMadan-designEsthetics\"<[email protected]>;\r\n发送时间: 2019年12月26日(星期四) 晚上8:17\r\n收件人: \"huggingface/transformers\"<[email protected]>;\r\n抄送: \"晓辰\"<[email protected]>;\"Author\"<[email protected]>;\r\n主题: Re: [huggingface/transformers] How can I read my bert model by using transformers? (#2318)\r\n\r\n\r\n\r\n\r\nWell You can and it should work.\r\n First Try using just bert-uncased to check if its working correctly or maybe with BERT-Base, Chinese.\r\n If it says not found it may be some error from your local url so check if its the right folder location and name.\r\n Then if nothing works i guess you may need to finetune it.\r\n \r\n—\r\nYou are receiving this because you authored the thread.\r\nReply to this email directly, view it on GitHub, or unsubscribe.",
"Your welcome, Utilize cuda if u have gpus or try doing it on cloud. \r\nDo close the issue if it seems solved."
] | 1,577 | 1,577 | 1,577 | NONE | null | ## ❓ Questions & Help
When I use pytorch_pretrained_bert, I can load my model like this:
from pytorch_pretrained_bert import BertModel, BertTokenizer
tokenizer = BertTokenizer.from_pretrained(bert_vocab_path)
bert = BertModel.from_pretrained(bert_model_path)
When I use transformers, how can I do this?
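(A hedged sketch of what I have in mind, with the local path as a placeholder: from_pretrained in transformers can point at a directory holding config.json, pytorch_model.bin and vocab.txt.)
```python
from transformers import BertModel, BertTokenizer

model_dir = "model/chinese_L-12_H-768_A-12"      # placeholder local directory
tokenizer = BertTokenizer.from_pretrained(model_dir)
bert = BertModel.from_pretrained(model_dir)
```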
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2318/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2318/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2317 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2317/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2317/comments | https://api.github.com/repos/huggingface/transformers/issues/2317/events | https://github.com/huggingface/transformers/pull/2317 | 542,394,083 | MDExOlB1bGxSZXF1ZXN0MzU2ODQzMjgw | 2,317 | Fix beam search when sampling in language generation | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2317?src=pr&el=h1) Report\n> Merging [#2317](https://codecov.io/gh/huggingface/transformers/pull/2317?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/aeef4823ab6099249679756182700e6800024c36?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `0%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2317?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2317 +/- ##\n=======================================\n Coverage 73.49% 73.49% \n=======================================\n Files 87 87 \n Lines 14793 14793 \n=======================================\n Hits 10872 10872 \n Misses 3921 3921\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2317?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2317/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `63.45% <0%> (ø)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2317?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2317?src=pr&el=footer). Last update [aeef482...af1ca72](https://codecov.io/gh/huggingface/transformers/pull/2317?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Good point but we shouldn't be sampling like that indeed, we should be sampling independently for each beam."
] | 1,577 | 1,583 | 1,583 | MEMBER | null | I think there is a problem with beam search when setting `do_sample=True`
As it was implemented before, the variable `next_words` in previous line 829 would always contain
word ids < `vocab_size`, which forces all `beam_idx` to always be == 0.
This way all words would actually always be appended to the `input_ids` of the first beam.
In the proposed PR, the words are sampled over the scores of size `(batch_size, num_beams * vocab_size)`, which is similar to what is done in greedy decoding.
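To make the intended sampling concrete, here is a minimal, hedged sketch of sampling jointly over all beams (the names, shapes, and toy values are assumptions for illustration only, not the actual code in `modeling_utils.py`):

```python
import torch

# Illustrative sketch only -- these names and shapes are assumptions, not the real variables.
batch_size, num_beams, vocab_size = 2, 3, 11
scores = torch.randn(batch_size, num_beams * vocab_size)  # stand-in for flattened per-beam scores

probs = torch.softmax(scores, dim=-1)                          # optional temperature scaling could go here
next_tokens = torch.multinomial(probs, num_samples=num_beams)  # (batch_size, num_beams)

beam_idx = next_tokens // vocab_size   # beam each sample came from -- no longer forced to 0
word_idx = next_tokens % vocab_size    # token id within the vocabulary
```

Sampling independently within each beam (as suggested in the review comment) would instead reshape the scores to `(batch_size * num_beams, vocab_size)` before calling `multinomial`.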
I tried generating a couple of sequences with the proposed change and it seems to be important that the temperature is set relatively high (~1.5) to avoid repeating words.
Not 100% sure whether the proposed PR is the best fix. In general, beam search seems to work better with greedy decoding. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2317/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2317/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2317",
"html_url": "https://github.com/huggingface/transformers/pull/2317",
"diff_url": "https://github.com/huggingface/transformers/pull/2317.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2317.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/2316 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2316/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2316/comments | https://api.github.com/repos/huggingface/transformers/issues/2316/events | https://github.com/huggingface/transformers/pull/2316 | 542,392,268 | MDExOlB1bGxSZXF1ZXN0MzU2ODQyMDk1 | 2,316 | Delete [dev] behind pip install -e . | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2316?src=pr&el=h1) Report\n> Merging [#2316](https://codecov.io/gh/huggingface/transformers/pull/2316?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/aeef4823ab6099249679756182700e6800024c36?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2316?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2316 +/- ##\n=======================================\n Coverage 73.49% 73.49% \n=======================================\n Files 87 87 \n Lines 14793 14793 \n=======================================\n Hits 10872 10872 \n Misses 3921 3921\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2316?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2316?src=pr&el=footer). Last update [aeef482...73511e8](https://codecov.io/gh/huggingface/transformers/pull/2316?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"`[dev]` is there to install the development dependencies.\r\n\r\nWhat shell are you using?\r\n\r\nDoes `pip install -e \".[dev]\"` or `pip install -e .\\[dev\\]` work?\r\n\r\n\r\n\r\n",
"We're probably going to modify the syntax for shell scripts. When we do this, we should modify it throughout the repository, because there are a bunch of other instances of this.",
"I see!\r\nI was using the zsh shell. \r\n`pip install -e \".[dev]\"` and `pip install -e .\\[dev\\]` both work with zsh shell.\r\n\r\nWhen switching to the bash shell \r\n`pip install -e .[dev]` works as well.",
"I'm using zsh as well, but I must have enabled an option that makes the unquoted syntax work.\r\n\r\nI'm going to fix the instructions to prevent others from hitting the same problem.",
"Thanks for the report!"
] | 1,577 | 1,577 | 1,577 | MEMBER | null | I might be wrong here, but I think it should simply be
```bash
$ pip install -e .
```
without the `[dev]` extra (a quoting workaround is sketched below).
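If the development extras are still wanted, a possible workaround is to stop the shell from treating `[dev]` as a glob pattern — a hedged sketch based on the quoted commands in the comments above; behaviour may differ between shells:

```bash
# Assumed cause: zsh expands the square brackets as a glob; quoting or escaping avoids that.
pip install -e ".[dev]"
# or
pip install -e .\[dev\]
```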
When executing
```bash
$ pip install -e .[dev]
```
in my terminal I get the error:
`no matches found: .[dev]` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2316/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2316/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2316",
"html_url": "https://github.com/huggingface/transformers/pull/2316",
"diff_url": "https://github.com/huggingface/transformers/pull/2316.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2316.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/2315 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2315/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2315/comments | https://api.github.com/repos/huggingface/transformers/issues/2315/events | https://github.com/huggingface/transformers/pull/2315 | 542,392,016 | MDExOlB1bGxSZXF1ZXN0MzU2ODQxOTEy | 2,315 | Add hint to install pytest-xdist | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2315?src=pr&el=h1) Report\n> Merging [#2315](https://codecov.io/gh/huggingface/transformers/pull/2315?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/aeef4823ab6099249679756182700e6800024c36?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2315?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2315 +/- ##\n=======================================\n Coverage 73.49% 73.49% \n=======================================\n Files 87 87 \n Lines 14793 14793 \n=======================================\n Hits 10872 10872 \n Misses 3921 3921\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2315?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2315?src=pr&el=footer). Last update [aeef482...4c48701](https://codecov.io/gh/huggingface/transformers/pull/2315?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"They're installed by `pip install -e .[dev]`. You don't have them because you modified that step. Let's discuss on #2316."
] | 1,577 | 1,577 | 1,577 | MEMBER | null | Just a small hint that pytest-xdist should be installed before running the `make test` step. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2315/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2315/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2315",
"html_url": "https://github.com/huggingface/transformers/pull/2315",
"diff_url": "https://github.com/huggingface/transformers/pull/2315.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2315.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/2314 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2314/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2314/comments | https://api.github.com/repos/huggingface/transformers/issues/2314/events | https://github.com/huggingface/transformers/issues/2314 | 542,387,644 | MDU6SXNzdWU1NDIzODc2NDQ= | 2,314 | Is there a uncased gpt2? | {
"login": "cloudygoose",
"id": 1544039,
"node_id": "MDQ6VXNlcjE1NDQwMzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1544039?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cloudygoose",
"html_url": "https://github.com/cloudygoose",
"followers_url": "https://api.github.com/users/cloudygoose/followers",
"following_url": "https://api.github.com/users/cloudygoose/following{/other_user}",
"gists_url": "https://api.github.com/users/cloudygoose/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cloudygoose/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cloudygoose/subscriptions",
"organizations_url": "https://api.github.com/users/cloudygoose/orgs",
"repos_url": "https://api.github.com/users/cloudygoose/repos",
"events_url": "https://api.github.com/users/cloudygoose/events{/privacy}",
"received_events_url": "https://api.github.com/users/cloudygoose/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, all the available models are listed in the [pretrained models section of the documentation](https://huggingface.co/transformers/pretrained_models.html). For GPT-2, there are four different models (`gpt2`, `gpt2-medium`, `gpt2-large`, `gpt2-xl`), which are all cased."
] | 1,577 | 1,578 | 1,578 | NONE | null | ## ❓ Questions & Help
Hi, thanks for everything. Quick question: Is there a pre-trained uncased gpt2, like bert-uncased? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2314/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2314/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2313 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2313/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2313/comments | https://api.github.com/repos/huggingface/transformers/issues/2313/events | https://github.com/huggingface/transformers/pull/2313 | 542,369,693 | MDExOlB1bGxSZXF1ZXN0MzU2ODI1OTM4 | 2,313 | Add dropout to WordpieceTokenizer and BPE | {
"login": "vitaliyradchenko",
"id": 13647822,
"node_id": "MDQ6VXNlcjEzNjQ3ODIy",
"avatar_url": "https://avatars.githubusercontent.com/u/13647822?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vitaliyradchenko",
"html_url": "https://github.com/vitaliyradchenko",
"followers_url": "https://api.github.com/users/vitaliyradchenko/followers",
"following_url": "https://api.github.com/users/vitaliyradchenko/following{/other_user}",
"gists_url": "https://api.github.com/users/vitaliyradchenko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vitaliyradchenko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vitaliyradchenko/subscriptions",
"organizations_url": "https://api.github.com/users/vitaliyradchenko/orgs",
"repos_url": "https://api.github.com/users/vitaliyradchenko/repos",
"events_url": "https://api.github.com/users/vitaliyradchenko/events{/privacy}",
"received_events_url": "https://api.github.com/users/vitaliyradchenko/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2313?src=pr&el=h1) Report\n> Merging [#2313](https://codecov.io/gh/huggingface/transformers/pull/2313?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/81db12c3ba0c2067f43c4a63edf5e45f54161042?src=pr&el=desc) will **decrease** coverage by `<.01%`.\n> The diff coverage is `84%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2313?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2313 +/- ##\n==========================================\n- Coverage 73.54% 73.53% -0.01% \n==========================================\n Files 87 87 \n Lines 14789 14796 +7 \n==========================================\n+ Hits 10876 10880 +4 \n- Misses 3913 3916 +3\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2313?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/2313/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZ3B0Mi5weQ==) | `96.58% <100%> (ø)` | :arrow_up: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2313/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `95.13% <66.66%> (-1.22%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2313?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2313?src=pr&el=footer). Last update [81db12c...7472a38](https://codecov.io/gh/huggingface/transformers/pull/2313?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Hi i notice one possible issue in your code.\r\nyou use `random.random() > dropout`.\r\nHowever, according to `Docstring: random() -> x in the interval [0, 1).`\r\nSo even with droput=0, the bpe output is not deterministic.",
"Also, according to orig paper, dropout=1 should output char sequence.\r\nbut following unchanged code snippet will make it output the raw input\r\n```python3\r\nif not pairs:\r\n return token\r\n```",
"and `self.cache` should be updated iff droput=0",
"> Hi i notice one possible issue in your code.\r\n> you use `random.random() > dropout`.\r\n> However, according to `Docstring: random() -> x in the interval [0, 1).`\r\n> So even with droput=0, the bpe output is not deterministic.\r\n\r\nthanks, fixed",
"> Also, according to orig paper, dropout=1 should output char sequence.\r\n> but following unchanged code snippet will make it output the raw input\r\n> \r\n> ```python\r\n> if not pairs:\r\n> return token\r\n> ```\r\n\r\nfixed this too, thanks for pointing",
"> and `self.cache` should be updated iff droput=0\r\n\r\nalso fixed, thanks",
"> > Also, according to orig paper, dropout=1 should output char sequence.\r\n> > but following unchanged code snippet will make it output the raw input\r\n> > ```python\r\n> > if not pairs:\r\n> > return token\r\n> > ```\r\n> \r\n> fixed this too, thanks for pointing\r\n\r\nThis issue is not really fixed, the exact corner case is all merges are dropped at the beginning, not limited to dropout=1.\r\nI think the correct fix is replace\r\n```python\r\n if dropout != 1:\r\n pairs = [pair for pair in get_pairs(word) if random.random() >= dropout and pair in self.bpe_ranks]\r\n else:\r\n # we should merge space byte with first token char\r\n new_word = []\r\n token_index = 0\r\n while token_index < len(token):\r\n if token[token_index] != self.byte_encoder[32]:\r\n new_word.append(token[token_index])\r\n token_index += 1\r\n else:\r\n new_word.append(token[token_index : token_index + 2])\r\n token_index += 2\r\n\r\n return \" \".join(new_word)\r\n\r\n\r\n if not pairs:\r\n return token\r\n\r\n\r\n while True:\r\n```\r\nwith\r\n```python\r\n pairs = [pair for pair in get_pairs(word) if random.random() >= dropout and pair in self.bpe_ranks]\r\n\r\n while pairs:\r\n```",
"> > > Also, according to orig paper, dropout=1 should output char sequence.\r\n> > > but following unchanged code snippet will make it output the raw input\r\n> > > ```python\r\n> > > if not pairs:\r\n> > > return token\r\n> > > ```\r\n> > \r\n> > \r\n> > fixed this too, thanks for pointing\r\n> \r\n> This issue is not really fixed, the exact corner case is all merges are dropped at the beginning, not limited to dropout=1.\r\n> I think the correct fix is replace\r\n> \r\n> ```python\r\n> if dropout != 1:\r\n> pairs = [pair for pair in get_pairs(word) if random.random() >= dropout and pair in self.bpe_ranks]\r\n> else:\r\n> # we should merge space byte with first token char\r\n> new_word = []\r\n> token_index = 0\r\n> while token_index < len(token):\r\n> if token[token_index] != self.byte_encoder[32]:\r\n> new_word.append(token[token_index])\r\n> token_index += 1\r\n> else:\r\n> new_word.append(token[token_index : token_index + 2])\r\n> token_index += 2\r\n> \r\n> return \" \".join(new_word)\r\n> \r\n> \r\n> if not pairs:\r\n> return token\r\n> \r\n> \r\n> while True:\r\n> ```\r\n> \r\n> with\r\n> \r\n> ```python\r\n> pairs = [pair for pair in get_pairs(word) if random.random() >= dropout and pair in self.bpe_ranks]\r\n> \r\n> while pairs:\r\n> ```\r\n\r\nunderstood your point. simplified code, thanks",
"I have one advice that replace the usage of `random.random() >= dropout` with `dropout == 0 or dropout < 1 and dropout <= random.random()`, utilizing the short-circuit operator to prevent consuming unnecessary random number. Otherwise, this side effects may cause existing result rely on `random` unrepeatable.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@vitaliyradchenko good idea to add this feature. are you planning to add the suggestion of @boy2000-007man https://github.com/huggingface/transformers/pull/2313#issuecomment-573192073 and re-fresh this PR so that it will finally get merged?"
] | 1,577 | 1,597 | 1,584 | CONTRIBUTOR | null | We can add dropout not only to the model weights but also to the tokenizer. The paper by Ivan Provilkov et al. (2019, https://arxiv.org/pdf/1910.13267.pdf) describes the benefits of this approach and shows that it is almost always better to use dropout during tokenization (use it only for training; for inference, dropout should be equal to 0).
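For intuition, here is a minimal, self-contained sketch of the merge-dropping idea from the paper (the merge table and function below are made up for illustration and are not the actual `transformers` implementation):

```python
import random

def bpe_dropout_segment(word, merges, dropout=0.1, rng=random):
    """Toy BPE with merge dropout: each applicable merge is skipped with probability `dropout`."""
    symbols = list(word)
    while True:
        # Candidate merges present in the table, each kept with probability 1 - dropout.
        candidates = [
            (merges[pair], i)
            for i, pair in enumerate(zip(symbols, symbols[1:]))
            if pair in merges and rng.random() >= dropout
        ]
        if not candidates:
            return symbols
        _, i = min(candidates)  # apply the highest-priority surviving merge
        symbols[i:i + 2] = [symbols[i] + symbols[i + 1]]

# Made-up merge table (lower rank = higher priority); real merges are learned from data.
merges = {("l", "o"): 0, ("lo", "w"): 1, ("e", "r"): 2}
print(bpe_dropout_segment("lower", merges, dropout=0.0))  # ['low', 'er'] -- deterministic
print(bpe_dropout_segment("lower", merges, dropout=0.5))  # varies from run to run
```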
Example:
```
import transformers
tokenizer = transformers.RobertaTokenizer.from_pretrained("roberta-base")
tokenizer.tokenize("Dropout is very important")
# default value is 0
# ['Drop', 'out', 'Ġis', 'Ġvery', 'Ġimportant']
tokenizer.tokenize("Dropout is very important", dropout=0.1)
# ['Drop', 'out', 'Ġis', 'Ġvery', 'Ġimport', 'ant']
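# Note: per the description above, dropout is a training-time augmentation only;
# keeping the default dropout=0 (e.g. at inference) leaves tokenization deterministic.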
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2313/reactions",
"total_count": 12,
"+1": 11,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2313/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2313",
"html_url": "https://github.com/huggingface/transformers/pull/2313",
"diff_url": "https://github.com/huggingface/transformers/pull/2313.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2313.patch",
"merged_at": null
} |